


5th L4DC 2023: Philadelphia, PA, USA
Nikolai Matni, Manfred Morari, George J. Pappas (eds.): Learning for Dynamics and Control Conference, L4DC 2023, 15-16 June 2023, Philadelphia, PA, USA. Proceedings of Machine Learning Research 211, PMLR 2023.

- Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti: Learning on Manifolds: Universal Approximations Properties using Geometric Controllability Conditions for Neural ODEs. 1-11
- Saber Jafarpour, Akash Harapanahalli, Samuel Coogan: Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers. 12-25
- Adithya Ramesh, Balaraman Ravindran: Physics-Informed Model-Based Reinforcement Learning. 26-37
- Bilgehan Sel, Ahmad Al-Tawaha, Yuhao Ding, Ruoxi Jia, Bo Ji, Javad Lavaei, Ming Jin: Learning-to-Learn to Guide Random Search: Derivative-Free Meta Blackbox Optimization on Manifold. 38-50
- Yi Tian, Kaiqing Zhang, Russ Tedrake, Suvrit Sra: Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control? 51-63
- Pengzhi Yang, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov: Policy Learning for Active Target Tracking over Continuous SE(3) Trajectories. 64-75
- Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee: Guaranteed Conformance of Neurosymbolic Models to Natural Constraints. 76-89
- Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac: ISAACS: Iterative Soft Adversarial Actor-Critic for Safety. 90-103
- Yikun Cheng, Pan Zhao, Naira Hovakimyan: Safe and Efficient Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions. 104-115
- Xunbi A. Ji, Gábor Orosz: Learning the dynamics of autonomous nonlinear delay systems. 116-127
- Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis: Improving Gradient Computation for Differentiable Physics Simulation with Contacts. 128-141
- Orhan Eren Akgün, Arif Kerem Dayi, Stephanie Gil, Angelia Nedich: Learning Trust Over Directed Graphs in Multiagent Systems. 142-154
- Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn: Contrastive Example-Based Control. 155-169
- Sheng Cheng, Lin Song, Minkyung Kim, Shenlong Wang, Naira Hovakimyan: DiffTune+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation. 170-183
- Sarper Aydin, Ceyhun Eksin: Policy Gradient Play with Networked Agents in Markov Potential Games. 184-195
- Serban Sabau, Yifei Zhang, Sourav Kumar Ukil: Sample Complexity Bound for Evaluating the Robust Observer's Performance under Coprime Factors Uncertainty. 196-207
- Keyan Miao, Konstantinos Gatsis: Learning Robust State Observers using Neural ODEs. 208-219
- Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato: End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization. 220-234
- Tejas Pagare, Vivek S. Borkar, Konstantin Avrachenkov: Full Gradient Deep Reinforcement Learning for Average-Reward Criterion. 235-247
- Yitian Chen, Timothy L. Molloy, Tyler H. Summers, Iman Shames: Regret Analysis of Online LQR Control via Trajectory Prediction and Tracking. 248-258
- Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman: Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching. 259-271
- Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan: Compositional Neural Certificates for Networked Dynamical Systems. 272-285
- Fernando Castañeda, Haruki Nishimura, Rowan Thomas McAllister, Koushil Sreenath, Adrien Gaidon: In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States. 286-299
- Anushri Dixit, Lars Lindemann, Skylar X. Wei, Matthew Cleaveland, George J. Pappas, Joel W. Burdick: Adaptive Conformal Prediction for Motion Planning among Dynamic Agents. 300-314
- Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic: Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning. 315-332
- Yan Jiang, Wenqi Cui, Baosen Zhang, Jorge Cortés: Equilibria of Fully Decentralized Learning in Networked Systems. 333-345
- Luke Bhan, Yuanyuan Shi, Miroslav Krstic: Operator Learning for Nonlinear Adaptive Control. 346-357
- Zhuoyuan Wang, Yorie Nakahira: A Generalizable Physics-informed Learning Framework for Risk Probability Estimation. 358-370
- Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang: Efficient Reinforcement Learning Through Trajectory Generation. 371-382
- Muhammad Abdullah Naeem: Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach. 383-394
- Yashaswini Murthy, Mehrdad Moharrami, R. Srikant: Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs. 395-406
- Taha Entesari, Mahyar Fazlyab: Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes. 407-419
- Lauren E. Conger, Sydney Vernon, Eric Mazumdar: Designing System Level Synthesis Controllers for Nonlinear Systems with Stability Guarantees. 420-430
- Kaiyuan Tan, Jun Wang, Yiannis Kantaros: Targeted Adversarial Attacks against Neural Network Trajectory Predictors. 431-444
- Xiaobing Dai, Armin Lederer, Zewen Yang, Sandra Hirche: Can Learning Deteriorate Control? Analyzing Computational Delays in Gaussian Process-Based Event-Triggered Online Learning. 445-457
- Paul Griffioen, Alex Devonport, Murat Arcak: Probabilistic Invariance for Gaussian Process State Space Models. 458-468
- Sampada Deglurkar, Michael H. Lim, Johnathan Tucker, Zachary N. Sunberg, Aleksandra Faust, Claire J. Tomlin: Compositional Learning-based Planning for Vision POMDPs. 469-482
- Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Yannis G. Kevrekidis, Mahyar Fazlyab: Certified Invertibility in Neural Networks via Mixed-Integer Programming. 483-496
- Spencer Hutchinson, Berkay Turan, Mahnoosh Alizadeh: The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback. 497-508
- Guillaume O. Berger, Sriram Sankaranarayanan: Template-Based Piecewise Affine Regression. 509-520
- Thomas Beckers, Qirui Wu, George J. Pappas: Physics-enhanced Gaussian Process Variational Autoencoder. 521-533
- Leilei Cui, Tamer Basar, Zhong-Ping Jiang: A Reinforcement Learning Look at Risk-Sensitive Linear Quadratic Gaussian Control. 534-546
- Erfan Aasi, Mingyu Cai, Cristian Ioan Vasile, Calin Belta: Time-Incremental Learning of Temporal Logic Classifiers Using Decision Trees. 547-559
- Paula Gradu, Elad Hazan, Edgar Minasyan: Adaptive Regret for Control of Time-Varying Dynamics. 560-572
- Zihao Zhou, Rose Yu: Automatic Integration for Fast and Interpretable Neural Point Processes. 573-585
- Thomas T. C. K. Zhang, Katie Kang, Bruce D. Lee, Claire J. Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni: Multi-Task Imitation Learning for Linear Dynamical Systems. 586-599
- Srinath Tankasala, Mitch Pryor: Accelerating Trajectory Generation for Quadrotors Using Transformers. 600-611
- Yaqi Duan, Martin J. Wainwright: A finite-sample analysis of multi-step temporal difference estimates. 612-624
- Swaminathan Gurumurthy, Zachary Manchester, J. Zico Kolter: Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning. 625-638
- Swaminathan Gurumurthy, J. Zico Kolter, Zachary Manchester: Deep Off-Policy Iterative Learning Control. 639-652
- Muhammad Abdullah Naeem, Miroslav Pajic: Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space. 653-664
- Prithvi Akella, Skylar X. Wei, Joel W. Burdick, Aaron D. Ames: Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight with Less Than One Minute of Data. 665-678
- Cyrus Neary, Ufuk Topcu: Compositional Learning of Dynamical System Models Using Port-Hamiltonian Neural Networks. 679-691
- Yuyang Zhang, Runyu Zhang, Yuantao Gu, Na Li: Multi-Agent Reinforcement Learning with Reward Delays. 692-704
- Wenliang Liu, Kevin Leahy, Zachary Serlin, Calin Belta: CatlNet: Learning Communication and Coordination Policies from CaTL+ Specifications. 705-717
- Luigi Campanaro, Daniele De Martini, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis: Roll-Drop: accounting for observation noise with a single parameter. 718-730
- Valentin Duruisseaux, Thai Duong, Melvin Leok, Nikolay Atanasov: Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems. 731-744
- Armand Comas Massague, Christian Fernandez Lopez, Sandesh Ghimire, Haolin Li, Mario Sznaier, Octavia I. Camps: Learning Object-Centric Dynamic Modes from Video and Emerging Properties. 745-769
- Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots: Continuous Versatile Jumping Using Learned Action Residuals. 770-782
- Weiye Zhao, Tairan He, Changliu Liu: Probabilistic Safeguard for Reinforcement Learning Using Safety Index Guided Gaussian Process Models. 783-796
- An T. Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki: Hierarchical Policy Blending As Optimal Transport. 797-812
- Xu Zhang, Marcos M. Vasconcelos: Top-k data selection via distributed sample quantile inference. 813-824
- Harrison Delecki, Anthony Corso, Mykel J. Kochenderfer: Model-based Validation as Probabilistic Inference. 825-837
- Tanya Veeravalli, Maxim Raginsky: Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations. 838-850
- Saminda Abeyruwan, Alex Bewley, Nicholas Matthew Boffi, Krzysztof Marcin Choromanski, David B. D'Ambrosio, Deepali Jain, Pannag R. Sanketi, Anish Shankar, Vikas Sindhwani, Sumeet Singh, Jean-Jacques E. Slotine, Stephen Tu: Agile Catching with Whole-Body MPC and Blackbox Policy Learning. 851-863
- Kehan Long, Yinzhuang Yi, Jorge Cortés, Nikolay Atanasov: Distributionally Robust Lyapunov Function Search Under Uncertainty. 864-877
- Jan Achterhold, Philip Tobuschat, Hao Ma, Dieter Büchler, Michael Muehlebach, Joerg Stueckler: Black-Box vs. Gray-Box: A Case Study on Learning Table Tennis Ball Trajectory Prediction with Spin and Impacts. 878-890
- Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers: Data-driven memory-dependent abstractions of dynamical systems. 891-902
- SooJean Han, Soon-Jo Chung, Johanna Gustafson: Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns. 903-914
- Xiyu Deng, Christian Kurniawan, Adhiraj Chakraborty, Assane Gueye, Niangjun Chen, Yorie Nakahira: A Learning and Control Perspective for Microfinance. 915-927
- Reza Khodayi-mehr, Pingcheng Jian, Michael M. Zavlanos: Physics-Guided Active Learning of Environmental Flow Fields. 928-940
- Francesco De Lellis, Marco Coraggio, Giovanni Russo, Mirco Musolesi, Mario di Bernardo: CT-DQN: Control-Tutored Deep Reinforcement Learning. 941-953
- Panagiotis Vlantis, Leila Bridgeman, Michael M. Zavlanos: Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe. 954-965
- Joshua Pilipovsky, Vignesh Sivaramakrishnan, Meeko Oishi, Panagiotis Tsiotras: Probabilistic Verification of ReLU Neural Networks via Characteristic Functions. 966-979
- Guanru Pan, Ruchuan Ou, Timm Faulwasser: Data-driven Stochastic Output-Feedback Predictive Control: Recursive Feasibility through Interpolated Initial Conditions. 980-992
- Rishi Rani, Massimo Franceschetti: Detection of Man-in-the-Middle Attacks in Model-Free Reinforcement Learning. 993-1007
- Zhaolin Ren, Yang Zheng, Maryam Fazel, Na Li: On Controller Reduction in Linear Quadratic Gaussian Control with Performance Bounds. 1008-1019
- Deepan Muthirayan, Chinmay Maheshwari, Pramod P. Khargonekar, Shankar S. Sastry: Competing Bandits in Time Varying Matching Markets. 1020-1031
- Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan: Regret Guarantees for Online Deep Control. 1032-1045
- Alex Devonport, Peter Seiler, Murat Arcak: Frequency Domain Gaussian Process Models for H∞ Uncertainties. 1046-1057
- Sydney Dolan, Siddharth Nayak, Hamsa Balakrishnan: Satellite Navigation and Coordination with Limited Information Sharing. 1058-1071
- Lukas Kesper, Sebastian Trimpe, Dominik Baumann: Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control. 1072-1085
- Alessio Russo: Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems. 1086-1098
- Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin M. Hasani, Daniela Rus: Learning Stability Attention in Vision-based End-to-end Driving Policies. 1099-1111
- Arnob Ghosh: Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation. 1112-1124
- Kong Yao Chee, M. Ani Hsieh, Nikolai Matni: Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles. 1125-1137
- Yingying Li, James A. Preiss, Na Li, Yiheng Lin, Adam Wierman, Jeff S. Shamma: Online switching control with stability and regret guarantees. 1138-1151
- Elie Aljalbout, Maximilian Karl, Patrick van der Smagt: CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces. 1152-1166
- Hancheng Min, Enrique Mallada: Learning Coherent Clusters in Weakly-Connected Network Systems. 1167-1179
- Antoine Leeman, Johannes Köhler, Samir Bennani, Melanie N. Zeilinger: Predictive safety filter using system level synthesis. 1180-1192
- Rahel Rickenbach, Elena Arcari, Melanie N. Zeilinger: Time Dependent Inverse Optimal Control using Trigonometric Basis Functions. 1193-1204
- Daniel Tabas, Ahmed S. Zamzam, Baosen Zhang: Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning. 1205-1217
- Majid Khadiv, Avadesh Meduri, Huaijiang Zhu, Ludovic Righetti, Bernhard Schölkopf: Learning Locomotion Skills from MPC in Sensor Space. 1218-1230
- Sophia Huiwen Sun, Robin Walters, Jinxi Li, Rose Yu: Probabilistic Symmetry for Multi-Agent Dynamics. 1231-1244
- Zifan Wang, Yulong Gao, Siyi Wang, Michael M. Zavlanos, Alessandro Abate, Karl Henrik Johansson: Policy Evaluation in Distributional LQR. 1245-1256
- Nick-Marios T. Kokolakis, Kyriakos G. Vamvoudakis, Wassim M. Haddad: Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning. 1257-1270
- Tahiya Salam, Alice Kate Li, M. Ani Hsieh: Online Estimation of the Koopman Operator Using Fourier Features. 1271-1283
- Tobias Enders, James Harrison, Marco Pavone, Maximilian Schiffer: Hybrid Multi-agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems. 1284-1296
- Doumitrou Daniil Nimara, Mohammadreza Malek-Mohammadi, Petter Ögren, Jieqiang Wei, Vincent Huang: Model-Based Reinforcement Learning for Cavity Filter Tuning. 1297-1307
- Han Wang, Leonardo Felipe Toso, James Anderson: FedSysID: A Federated Approach to Sample-Efficient System Identification. 1308-1320
- Patricia Pauli, Dennis Gramlich, Frank Allgöwer: Lipschitz constant estimation for 1D convolutional neural networks. 1321-1332
- Hengquan Guo, Zhu Qi, Xin Liu: Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed Bandit with Constraints. 1333-1344
- Gautam Goel, Naman Agarwal, Karan Singh, Elad Hazan: Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret. 1345-1356
- Ian Char, Joseph Abbate, Laszlo Bardoczi, Mark D. Boyer, Youngseog Chung, Rory Conlin, Keith Erickson, Viraj Mehta, Nathan Richner, Egemen Kolemen, Jeff G. Schneider: Offline Model-Based Reinforcement Learning for Tokamak Control. 1357-1372
- Tong Guanchun, Michael Muehlebach: A Dynamical Systems Perspective on Discrete Optimization. 1373-1386
- Aritra Mitra, Hamed Hassani, George J. Pappas: Linear Stochastic Bandits over a Bit-Constrained Channel. 1387-1399
- Yue Meng, Chuchu Fan: Hybrid Systems Neural Control with Region-of-Attraction Planner. 1400-1415
- Killian Reed Wood, Emiliano Dall'Anese: Online Saddle Point Tracking with Decision-Dependent Data. 1416-1428
- Bence Zsombor Hadlaczky, Noémi Friedman, Béla Takarics, Bálint Vanek: Wing shape estimation with Extended Kalman filtering and KalmanNet neural network of a flexible wing aircraft. 1429-1440
- Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt, Justin Bayer: Filter-Aware Model-Predictive Control. 1441-1454
- Alireza Farahmandi, Brian C. Reitz, Mark J. Debord, Douglas Philbrick, Katia Estabridis, Gary A. Hewer: Hyperparameter Tuning of an Off-Policy Reinforcement Learning Algorithm for H∞ Tracking Control. 1455-1466
- Sourya Dey, Eric William Davis: DLKoopman: A deep learning software package for Koopman theory. 1467-1479
- Michelle Guo, Yifeng Jiang, Andrew Everett Spielberg, Jiajun Wu, C. Karen Liu: Benchmarking Rigid Body Contact Models. 1480-1492
- Kwangjun Ahn, Zakaria Mhammedi, Horia Mania, Zhang-Wei Hong, Ali Jadbabaie: Model Predictive Control via On-Policy Imitation Learning. 1493-1505
