


20th AAMAS 2021: Virtual Event, UK
- Frank Dignum, Alessio Lomuscio, Ulle Endriss, Ann Nowé: AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021. ACM 2021, ISBN 978-1-4503-8307-3
Blue Sky Ideas Track
- Niclas Boehmer, Rolf Niedermeier: Broadening the Research Agenda for Computational Social Choice: Multiple Preference Profiles and Multiple Solutions. 1-5
- Gabriel Istrate: Models We Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation. 6-11
- Amol Kelkar: Cognitive Homeostatic Agents. 12-16
- Jeffrey O. Kephart: Multi-modal Agents for Business Intelligence. 17-22
- Alexander Mey, Frans A. Oliehoek: Environment Shift Games: Are Multiple Agents the Solution, and not the Problem, to Non-Stationarity? 23-27
- Reuth Mirsky, Peter Stone: The Seeing-Eye Robot Grand Challenge: Rethinking Automated Care. 28-33
- Decebal Constantin Mocanu, Elena Mocanu, Tiago Pinto, Selima Curci, Phuong H. Nguyen, Madeleine Gibescu, Damien Ernst, Zita A. Vale: Sparse Training Theory for Scalable and Efficient Agents. 34-38
- Gauthier Picard, Clément Caron, Jean-Loup Farges, Jonathan Guerra, Cédric Pralet, Stéphanie Roussel: Autonomous Agents and Multiagent Systems Challenges in Earth Observation Satellite Constellations. 39-44
- Avi Rosenfeld: Better Metrics for Evaluating Explainable Artificial Intelligence. 45-50
- Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou-Ammar, Jun Wang, Matthew E. Taylor: Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems. 51-56
- Vahid Yazdanpanah, Enrico H. Gerding, Sebastian Stein, Mehdi Dastani, Catholijn M. Jonker, Timothy J. Norman: Responsibility Research for Trustworthy Autonomous Systems. 57-62
- Dengji Zhao: Mechanism Design Powered by Social Interactions. 63-67
Main Track
- Amal Abdulrahman, Deborah Richards, Ayse Aysin Bilgin: Reason Explanation for Encouraging Behaviour Change Intention. 68-77
- Kenshi Abe, Yusuke Kaneko: Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games. 78-87
- Ramin Ahadi, Wolfgang Ketter, John Collins, Nicolò Daina: Siting and Sizing of Charging Infrastructure for Shared Autonomous Electric Fleets. 88-96
- Lucas Nunes Alegre, Ana L. C. Bazzan, Bruno C. da Silva: Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection. 97-105
- Andrea Aler Tubella, Andreas Theodorou, Juan Carlos Nieves: Interrogating the Black Box: Transparency through Information-Seeking Dialogues. 106-114
- Nicolas Anastassacos, Julian García, Stephen Hailes, Mirco Musolesi: Cooperation and Reputation Dynamics with Reinforcement Learning. 115-123
- Siddharth Aravindan, Wee Sun Lee: State-Aware Variational Thompson Sampling for Deep Q-Networks. 124-132
- Haris Aziz, Hau Chan, Ágnes Cseh, Bo Li, Fahimeh Ramezani, Chenhao Wang: Multi-Robot Task Allocation-Complexity and Approximation. 133-141
- Matteo Baldoni, Cristina Baroglio, Roberto Micalizio, Stefano Tedeschi: Robustness Based on Accountability in Multiagent Organizations. 142-150
- Jacques Bara, Omer Lev, Paolo Turrini: Predicting Voting Outcomes in Presence of Communities. 151-159
- Eugenio Bargiacchi, Timothy Verstraeten, Diederik M. Roijers: Cooperative Prioritized Sweeping. 160-168
- Siddharth Barman, Paritosh Verma: Existence and Computation of Maximin Fair Allocations Under Matroid-Rank Valuations. 169-177
- Dorothea Baumeister, Tobias Alexander Hogrebe: Complexity of Scheduling and Predicting Round-Robin Tournaments. 178-186
- Dorothea Baumeister, Linus Boes, Robin Weishaupt: Complexity of Sequential Rules in Judgment Aggregation. 187-195
- Ryan Beal, Georgios Chalkiadakis, Timothy J. Norman, Sarvapali D. Ramchurn: Optimising Long-Term Outcomes using Real-World Fluent Objectives: An Application to Football. 196-204
- Ondrej Biza, Dian Wang, Robert Platt Jr., Jan-Willem van de Meent, Lawson L. S. Wong: Action Priors for Large Action Spaces in Robotics. 205-213
- Sirin Botan, Ronald de Haan, Marija Slavkovik, Zoi Terzopoulou: Egalitarian Judgment Aggregation. 214-222
- Sirin Botan: Manipulability of Thiele Methods on Party-List Profiles. 223-231
- Fabien Boucaud, Catherine Pelachaud, Indira Thouvenin: Decision Model for a Virtual Agent that can Touch and be Touched. 232-241
- Yasser Bourahla, Manuel Atencia, Jérôme Euzenat: Knowledge Improvement and Diversity under Interaction-Driven Adaptation of Learned Ontologies. 242-250
- Felix Brandt, Martin Bullinger, Patrick Lederer: On the Indecisiveness of Kelly-Strategyproof Social Choice Functions. 251-259
- Robert Bredereck, Aleksander Figiel, Andrzej Kaczmarczyk, Dusan Knop, Rolf Niedermeier: High-Multiplicity Fair Allocation Made More Practical. 260-268
- Federico Cacciamani, Andrea Celli, Marco Ciccone, Nicola Gatti: Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies. 269-278
- Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou: Imitation Learning from Pixel-Level Demonstrations by HashReward. 279-287
- Pierre Cardi, Laurent Gourvès, Julien Lesca: Worst-case Bounds for Spending a Common Budget. 288-296
- Vishal Chakraborty, Phokion G. Kolaitis: Classifying the Complexity of the Possible Winner Problem on Partial Chains. 297-305
- Rahul Chandan, Dario Paccagnan, Jason R. Marden: Tractable Mechanisms for Computing Near-Optimal Utility Functions. 306-313
- Kangjie Chen, Shangwei Guo, Tianwei Zhang, Shuxin Li, Yang Liu: Temporal Watermarks for Deep Reinforcement Learning Models. 314-322
- Lin Chen, Lei Xu, Zhimin Gao, Ahmed Imtiaz Sunny, Keshav Kasichainula, Weidong Shi: A Game Theoretical Analysis of Non-Linear Blockchain System. 323-331
- Mingxi Cheng, Chenzhong Yin, Junyao Zhang, Shahin Nazarian, Jyotirmoy Deshmukh, Paul Bogdan: A General Trust Framework for Multi-Agent Systems. 332-340
- Shushman Choudhury, Jayesh K. Gupta, Peter Morales, Mykel J. Kochenderfer: Scalable Anytime Planning for Multi-Agent MDPs. 341-349
- Serafino Cicerone, Alessia Di Fonso, Gabriele Di Stefano, Alfredo Navarra: MOBLOT: Molecular Oblivious Robots. 350-358
- Saar Cohen, Noa Agmon: Spatial Consensus-Prevention in Robotic Swarms. 359-367
- Rodica Condurache, Catalin Dima, Youssouf Oualhadj, Nicolas Troquard: Rational Synthesis in the Commons with Careless and Careful Agents. 368-376
- Elena Congeduti, Alexander Mey, Frans A. Oliehoek: Loss Bounds for Approximate Influence-Based Abstraction. 377-385
- Jiaxun Cui, William Macke, Harel Yedidsion, Aastha Goyal, Daniel Urieli, Peter Stone: Scalable Multiagent Driving Policies for Reducing Traffic Congestion. 386-394
- Panayiotis Danassis, Zeki Doruk Erden, Boi Faltings: Improved Cooperation by Exploiting a Common Signal. 395-403
- Dave de Jonge, Filippo Bistaffa, Jordi Levy: A Heuristic Algorithm for Multi-Agent Vehicle Routing with Automated Negotiation. 404-412
- Argyrios Deligkas, Themistoklis Melissourgos, Paul G. Spirakis: Walrasian Equilibria in Markets with Small Demands. 413-419
- Chuang Deng, Zhihai Rong, Lin Wang, Xiaofan Wang: Modeling Replicator Dynamics in Stochastic Games Using Markov Chain Method. 420-428
- Louise A. Dennis, Nir Oren: Explaining BDI Agent Behaviour through Dialogue. 429-437
- Palash Dey, Suman Kalyan Maity, Sourav Medya, Arlei Silva: Network Robustness via Global k-cores. 438-446
- Zehao Dong, Sanmay Das, Patrick J. Fowler, Chien-Ju Ho: Efficient Nonmyopic Online Allocation of Scarce Reusable Resources. 447-455
- Yali Du, Bo Liu, Vincent Moens, Ziqi Liu, Zhicheng Ren, Jun Wang, Xu Chen, Haifeng Zhang: Learning Correlated Communication Topology in Multi-Agent Reinforcement learning. 456-464
- Miroslav Dudík, Xintong Wang, David M. Pennock, David M. Rothschild: Log-time Prediction Markets for Interval Securities. 465-473
- Pierre El Mqirmi, Francesco Belardinelli, Borja G. León: An Abstraction-based Method to Check Multi-Agent Deep Reinforcement-Learning Behaviors. 474-482
- Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng: Safe Multi-Agent Reinforcement Learning via Shielding. 483-491
- Hélène Fargier, Jérôme Mengin: A Knowledge Compilation Map for Conditional Preference Statements-based Languages. 492-500
- Johan Ferret, Olivier Pietquin, Matthieu Geist: Self-Imitation Advantage Learning. 501-509
- Alina Filimonov, Reshef Meir: Strategyproof Facility Location Mechanisms on Discrete Trees. 510-518
- Fabrice Gaignier, Yannis Dimopoulos, Jean-Guy Mailly, Pavlos Moraitis: Probabilistic Control Argumentation Frameworks. 519-527
- Rustam Galimullin, Thomas Ågotnes: Quantified Announcements and Common Knowledge. 528-536
- Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart: Partially Observable Mean Field Reinforcement Learning. 537-545
- Anis Gargouri, Sébastien Konieczny, Pierre Marquis, Srdjan Vesic: On a Notion of Monotonic Support for Bipolar Argumentation Frameworks. 546-554
- Siddharth Gupta, Meirav Zehavi: Multivariate Analysis of Scheduling Fair Competitions. 555-564
- Vaibhav Gupta, Daksh Anand, Praveen Paruchuri, Akshat Kumar: Action Selection for Composable Modular Deep Reinforcement Learning. 565-573
- Lewis Hammond, James Fox, Tom Everitt, Alessandro Abate, Michael J. Wooldridge: Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice. 574-582
- Lewis Hammond, Alessandro Abate, Julian Gutierrez, Michael J. Wooldridge: Multi-Agent Reinforcement Learning with Temporal Logic Specifications. 583-592
- Paul Harrenstein, Grzegorz Lisowski, Ramanujan Sridharan, Paolo Turrini: A Hotelling-Downs Framework for Party Nominees. 593-601
- Keyang He, Bikramjit Banerjee, Prashant Doshi: Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards. 602-610
- Taoan Huang, Bistra Dilkina, Sven Koenig: Learning Node-Selection Strategies in Bounded-Suboptimal Conflict-Based Search for Multi-Agent Path Finding. 611-619
- Léonard Hussenot, Robert Dadashi, Matthieu Geist, Olivier Pietquin: Show Me the Way: Intrinsic Motivation from Demonstrations. 620-628
- Ercument Ilhan, Jeremy Gow, Diego Perez Liebana: Action Advising with Advice Imitation in Deep Reinforcement Learning. 629-637
- Aviram Imber, Benny Kimelfeld: Computing the Extremal Possible Ranks with Incomplete Preferences. 638-646
- Aviram Imber, Benny Kimelfeld: Probabilistic Inference of Winners in Elections by Independent Random Voters. 647-655
- Katsuya Ito, Kentaro Minami, Kentaro Imajo, Kei Nakagawa: Trader-Company Method: A Metaheuristics for Interpretable Stock Price Prediction. 656-664
- Pallavi Jain, Nimrod Talmon, Laurent Bulteau: Partition Aggregation for Participatory Budgeting. 665-673
- Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel: Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning. 674-682
- Venkateswara Rao Kagita, Arun K. Pujari, Vineet Padmanabhan, Haris Aziz, Vikas Kumar: Committee Selection using Attribute Approvals. 683-691
- Takehiro Kawasaki, Ryoji Wada, Taiki Todo, Makoto Yokoo: Mechanism Design for Housing Markets over Social Networks. 692-700
- Shakil M. Khan, Yves Lespérance: Knowing Why - On the Dynamics of Knowledge about Actual Causes in the Situation Calculus. 701-709
- Jackson A. Killian, Andrew Perrault, Milind Tambe: Beyond "To Act or Not to Act": Fast Lagrangian Approaches to General Multi-Action Restless Bandits. 710-718
- Tabajara Krausburg, Jürgen Dix, Rafael H. Bordini: Feasible Coalition Sequences. 719-727
- Rajiv Ranjan Kumar, Pradeep Varakantham, Shih-Fen Cheng: Adaptive Operating Hours for Improved Performance of Taxi Fleets. 728-736
- Martin Lackner, Jan Maly: Approval-Based Shortlisting. 737-745
- Stefan Lauren, Francesco Belardinelli, Francesca Toni: Aggregating Bipolar Opinions. 746-754
- Omer Lev, Neel Patel, Vignesh Viswanathan, Yair Zick: The Price is (Probably) Right: Learning Market Equilibria from Samples. 755-763
- Sheng Li, Jayesh K. Gupta, Peter Morales, Ross E. Allen, Mykel J. Kochenderfer: Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning. 764-772
- Wenhao Li, Xiangfeng Wang, Bo Jin, Junjie Sheng, Yun Hua, Hongyuan Zha: Structured Diversification Emergence via Reinforced Organization Control and Hierachical Consensus Learning. 773-781
- Yuyu Li, Jianmin Ji: Parallel Curriculum Experience Replay in Distributed Reinforcement Learning. 782-789
- Yu Liang, Amulya Yadav: Let the DOCTOR Decide Whom to Test: Adaptive Testing Strategies to Tackle the COVID-19 Pandemic. 790-798
- Enrico Liscio, Michiel van der Meer, Luciano Cavalcante Siebert, Catholijn M. Jonker, Niek Mouter, Pradeep K. Murukannaiah: Axies: Identifying and Evaluating Context-Specific Values. 799-808
- Minghuan Liu, Tairan He, Minkai Xu, Weinan Zhang: Energy-Based Imitation Learning. 809-817
- Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters: Deceptive Reinforcement Learning for Privacy-Preserving Planning. 818-826
- Emiliano Lorini: A Logic of Evaluation. 827-835
- Matteo Luperto, Luca Fochetta, Francesco Amigoni: Exploration of Indoor Environments through Predicting the Layout of Partially Observed Rooms. 836-843
- Xueguang Lyu, Yuchen Xiao, Brett Daley, Christopher Amato: Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning. 844-852
- Xiaoteng Ma, Yiqin Yang, Chenghao Li, Yiwen Lu, Qianchuan Zhao, Jun Yang: Modeling the Interaction between Agents in Cooperative Multi-Agent Reinforcement Learning. 853-861
- Tejasvi Malladi, Karpagam Murugappan, Depak Sudarsanam, Ramasubramanian Suriyanarayanan, Arunchandar Vasan: To hold or not to hold? - Reducing Passenger Missed Connections in Airlines using Reinforcement Learning. 862-870
- Peta Masters, Michael Kirley, Wally Smith: Extended Goal Recognition: A Planning-Based Model for Strategic Deception. 871-879
- Aditya Mate, Andrew Perrault, Milind Tambe: Risk-Aware Interventions in Public Health: Planning with Restless Multi-Armed Bandits. 880-888
- Giulio Mazzi, Alberto Castellini, Alessandro Farinelli: Identification of Unexpected Decisions in Partially Observable Monte-Carlo Planning: A Rule-Based Approach. 889-897
- Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Francisco C. Santos: Cooperation between Independent Reinforcement Learners under Wealth Inequality and Collective Risks. 898-906
- Nieves Montes, Carles Sierra: Value-Guided Synthesis of Parametric Normative Systems. 907-915
- Francesca Mosca, Jose M. Such: ELVIRA: An Explainable Agent for Value and Utility-Driven Multiuser Privacy. 916-924
- Muhammad Faizan, Vasanth Sarathy, Gyan Tatiya, Shivam Goel, Saurav Gyawali, Mateo Guaman Castro, Jivko Sinapov, Matthias Scheutz: A Novelty-Centric Agent Architecture for Changing Worlds. 925-933
- Cyrus Neary, Zhe Xu, Bo Wu, Ufuk Topcu: Reward Machines for Cooperative Multi-Agent Reinforcement Learning. 934-942
- Thomas Nedelec, Jules Baudet, Vianney Perchet, Noureddine El Karoui: Adversarial Learning in Revenue-Maximizing Auctions. 955-963
- Yaru Niu, Rohan R. Paleja, Matthew C. Gombolay: Multi-Agent Graph-Attention Communication and Teaming. 964-973
- Michael Noukhovitch, Travis LaCroix, Angeliki Lazaridou, Aaron C. Courville: Emergent Communication under Competition. 974-982
- Caspar Oesterheld, Vincent Conitzer: Safe Pareto Improvements for Delegated Game Playing. 983-991
- Han-Ching Ou, Haipeng Chen, Shahin Jabbari, Milind Tambe: Active Screening for Recurrent Diseases: A Reinforcement Learning Approach. 992-1000
- Deval Patel, Arindam Khan, Anand Louis: Group Fairness for Knapsack Problems. 1001-1009
- Manon Prédhumeau, Lyuba Mancheva, Julie Dugdale, Anne Spalanzani: An Agent-Based Model to Predict Pedestrians Trajectories with an Autonomous Vehicle in Shared Spaces. 1010-1018
- Ben Rachmut, Roie Zivan, William Yeoh: Latency-Aware Local Search for Distributed Constraint Optimization. 1019-1027
- Md. Musfiqur Rahman, Ayman Rasheed, Md. Mosaddek Khan, Mohammad Ali Javidian, Pooyan Jamshidi, Md. Mamun-Or-Rashid: Accelerating Recursive Partition-Based Causal Structure Learning. 1028-1036
- Lokman Rahmani, David Minarsch, Jonathan Ward: Peer-to-peer Autonomous Agent Communication Network. 1037-1045
- Senthil Rajasekaran, Moshe Y. Vardi: Nash Equilibria in Finite-Horizon Multiagent Concurrent Games. 1046-1054
- Jingyao Ren, Vikraman Sathiyanarayanan, Eric Ewing, Baskin Senbaslar, Nora Ayanian: MAPFAST: A Deep Algorithm Selector for Multi Agent Path Finding using Shortest Path Embeddings. 1055-1063
- Sebastian Rodriguez, John Thangarajah, Michael Winikoff: User and System Stories: An Agile Approach for Managing Requirements in AOSE. 1064-1072
- Charlotte Roman, Michael Dennis, Andrew Critch, Stuart Russell: Accumulating Risk Capital Through Investing in Cooperation. 1073-1081
- Joshua Romoff, Peter Henderson, David Kanaa, Emmanuel Bengio, Ahmed Touati, Pierre-Luc Bacon, Joelle Pineau: TDprop: Does Adaptive Optimization With Jacobi Preconditioning Help Temporal Difference Learning? 1082-1090
- Heechang Ryu, Hayong Shin, Jinkyoo Park: Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning. 1091-1099
- Rohan Saphal, Balaraman Ravindran, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul: SEERL: Sample Efficient Ensemble Reinforcement Learning. 1100-1108
- Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short, Scott Niekum: Efficiently Guiding Imitation Learning Agents with Human Gaze. 1109-1117
- Vasanth Sarathy, Daniel Kasenberg, Shivam Goel, Jivko Sinapov, Matthias Scheutz: SPOTTER: Extending Symbolic Planning Operators through Targeted Reinforcement Learning. 1118-1126
- Amit Sarker, Moumita Choudhury, Md. Mosaddek Khan: A Local Search Based Approach to Solve Continuous DCOPs. 1127-1135
- Carolyn Saund, Andrei Bîrladeanu, Stacy Marsella: CMCF: An Architecture for Realtime Gesture Generation by Clustering Gestures by Motion and Communicative Function. 1136-1144
- Grant Schoenebeck, Chenkai Yu, Fang-Yi Yu: Timely Information from Prediction Markets. 1145-1153
- Nicolas Schwind, Emir Demirovic, Katsumi Inoue, Jean-Marie Lagniez: Partial Robustness in Team Formation: Bridging the Gap between Robustness and Resilience. 1154-1162
- Ayan Sengupta, Yasser Mohammad, Shinji Nakadai: An Autonomous Negotiating Agent Framework with Reinforcement Learning based Strategies and Adaptive Strategy Switching Mechanism. 1163-1172
- Anant Shah, Arun Rajkumar: Sequential Ski Rental Problem. 1173-1181
- Guni Sharon, James Ault, Peter Stone, Varun Raj Kompella, Roberto Capobianco: Multiagent Epidemiologic Inference through Realtime Contact Tracing. 1182-1190
- Wenlei Shi, Xinran Wei, Jia Zhang, Xiaoyuan Ni, Arthur Jiang, Jiang Bian, Tie-Yan Liu: Cooperative Policy Learning with Pre-trained Heterogeneous Observation Representations. 1191-1199
- Aditya Shinde, Prashant Doshi, Omid Setayeshfar: Cyber Attack Intent Recognition and Active Deception using Factored Interactive POMDPs. 1200-1208
- Sujoy Sikdar, Xiaoxi Guo, Haibin Wang, Lirong Xia, Yongzhi Cao: Sequential Mechanisms for Multi-type Resource Allocation. 1209-1217
- Gustavo R. Silva, Jomi F. Hübner, Leandro Buss Becker: Active Perception within BDI Agents Reasoning Cycle. 1218-1225
- Thiago D. Simão, Nils Jansen, Matthijs T. J. Spaan: AlwaysSafe: Reinforcement Learning without Safety Constraint Violations during Training. 1226-1235
- Joseph Singleton, Richard Booth: Rankings for Bipartite Tournaments via Chain Editing. 1236-1244
- Samuel Spaulding, Jocelyn Shen, Hae Won Park, Cynthia Breazeal: Towards Transferrable Personalized Student Models in Educational Games. 1245-1253
- Daniel Stan, Anthony W. Lin: Regular Model Checking Approach to Knowledge Reasoning over Parameterized Systems. 1254-1262
- Alexander Stannat, Can Umut Ileri, Dion Gijswijt, Johan Pouwelse: Achieving Sybil-Proofness in Distributed Work Systems. 1263-1271
- Thomas Steeples, Julian Gutierrez, Michael J. Wooldridge: Mean-Payoff Games with ω-Regular Specifications. 1272-1280
- Ankang Sun, Bo Chen, Xuan Vinh Doan: Connections between Fairness Criteria and Efficiency for Allocating Indivisible Chores. 1281-1289
- Koh Takeuchi, Ryo Nishida, Hisashi Kashima, Masaki Onishi: Grab the Reins of Crowds: Estimating the Effects of Crowd Movement Guidance Using Causal Inference. 1290-1298
- Shaojie Tang, Jing Yuan: Adaptive Cascade Submodular Maximization. 1299-1307
- Shi Yuan Tang, Athirai A. Irissappane, Frans A. Oliehoek, Jie Zhang: Learning Complex Policy Distribution with CEM Guided Adversarial Hypernetwork. 1308-1316
- Yunhao Tang: Guiding Evolutionary Strategies with Off-Policy Actor-Critic. 1317-1325
- Federico Toffano, Paolo Viappiani, Nic Wilson: Efficient Exact Computation of Setwise Minimax Regret for Interactive Preference Elicitation. 1326-1334
- Dimitrios Troullinos, Georgios Chalkiadakis, Ioannis Papamichail, Markos Papageorgiou: Collaborative Multiagent Decision Making for Lane-Free Autonomous Driving. 1335-1343
- Stef Van Havermaet, Yara Khaluf, Pieter Simoens: No More Hand-Tuning Rewards: Masked Constrained Policy Optimization for Safe Reinforcement Learning. 1344-1352
- Aravind Venugopal, Elizabeth Bondi, Harshavardhan Kamarthi, Keval Dholakia, Balaraman Ravindran, Milind Tambe: Reinforcement Learning for Unified Allocation and Patrolling in Signaling Games with Uncertainty. 1353-1361
- Timothy Verstraeten, Pieter-Jan Daems, Eugenio Bargiacchi, Diederik M. Roijers, Pieter J. K. Libin, Jan Helsen: Scalable Optimization for Wind Farm Control using Coordination Graphs. 1362-1370
- Chenhao Wang, Mengqi Zhang: Fairness and Efficiency in Facility Location Problems with Continuous Demands. 1371-1379
- Guanhua Wang, Runqi Guo, Yuko Sakurai, Muhammad Ali Babar, Mingyu Guo: Mechanism Design for Public Projects via Neural Networks. 1380-1388
- Marcin Waniek, Jan Woznica, Kai Zhou, Yevgeniy Vorobeychik, Talal Rahwan, Tomasz P. Michalak: Strategic Evasion of Centrality Measures. 1389-1397
- Hang Xu, Rundong Wang, Lev Raizman, Zinovi Rabinovich: Transferable Environment Poisoning: Training-time Attack on Reinforcement Learning. 1398-1406
- Bo Yang, Chaofan Ma, Xiaofang Xia: Drone Formation Control via Belief-Correlated Imitation Learning. 1407-1415
- Yuan Yao, Natasha Alechina, Brian Logan, John Thangarajah: Intention Progression using Quantitative Summary Information. 1416-1424
- Nutchanon Yongsatianchot, Stacy Marsella: A Computational Model of Coping for Simulating Human Behavior in High-Stress Situations. 1425-1433
- Adam Zychowski, Jacek Mandziuk: Evolution of Strategies in Sequential Security Games. 1434-1442
Extended Abstracts
- Ben Abramowitz, Ehud Shapiro, Nimrod Talmon:

How to Amend a Constitution? Model, Axioms, and Supermajority Rules. 1443-1445 - Enrique Areyan Viqueira, Cyrus Cousins, Amy Greenwald:

Learning Competitive Equilibria in Noisy Combinatorial Markets. 1446-1448 - Nicholas Asher, Julie Hunter:

Interpretive Blindness and the Impossibility of Learning from Testimony. 1449-1451 - Julien Audiffren:

Quantifying Human Perception with Multi-Armed Bandits. 1452-1454 - Michiel A. Bakker, Richard Everett, Laura Weidinger, Iason Gabriel, William S. Isaac, Joel Z. Leibo, Edward Hughes:

Modelling Cooperation in Network Games with Spatio-Temporal Complexity. 1455-1457 - Zev Battad, Mei Si:

Image Sequence Understanding through Narrative Sensemaking. 1458-1460 - Ruben Becker, Gianlorenzo D'Angelo, Hugo Gilbert:

Maximizing Influence-Based Group Shapley Centrality. 1461-1463 - Nicholas Bishop, Le Cong Dinh, Long Tran-Thanh:

How to Guide a Non-Cooperative Learner to Cooperate: Exploiting No-Regret Algorithms in System Design. 1464-1466 - Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham, Milind Tambe:

Learning Index Policies for Restless Bandits with Application to Maternal Healthcare. 1467-1468 - Diogo S. Carvalho, Joana Campos, Manuel Guimarães, Ana Antunes, João Dias, Pedro A. Santos:

CHARET: Character-centered Approach to Emotion Tracking in Stories. 1469-1471 - Hugo Caselles-Dupré, Michaël Garcia Ortiz, David Filliat:

On the Sensory Commutativity of Action Sequences for Embodied Agents. 1472-1474 - Jacopo Castellini, Sam Devlin, Frans A. Oliehoek, Rahul Savani:

Difference Rewards Policy Gradients. 1475-1477 - Rujikorn Charakorn, Poramate Manoonpong, Nat Dilokthanakul:

Learning to Cooperate with Unseen Agents Through Meta-Reinforcement Learning. 1478-1479 - Theodor Cimpeanu, Cedric Perret, The Anh Han:

Promoting Fair Proposers, Fair Responders or Both? Cost-Efficient Interference in the Spatial Ultimatum Game. 1480-1482 - Stefania Costantini, Andrea Formisano, Valentina Pitoni:

A Logic of Inferable in Multi-Agent Systems with Budget and Costs. 1483-1485 - Brett Daley, Cameron Hickert, Christopher Amato:

Stratified Experience Replay: Correcting Multiplicity Bias in Off-Policy Reinforcement Learning. 1486-1488 - Alaa Daoud, Flavien Balbo, Paolo Gianessi, Gauthier Picard:

A Generic Multi-Agent Model for Resource Allocation Strategies in Online On-Demand Transport with Autonomous Vehicles. 1489-1491 - Ayush Deva, Kumar Abhishek, Sujit Gujar:

A Multi-Arm Bandit Approach To Subset Selection Under Constraints. 1492-1494 - Ylva Ferstl, Michael Neff, Rachel McDonnell:

It's A Match! Gesture Generation Using Expressive Parameter Matching. 1495-1497 - Yuval Gabai Schlosberg, Roie Zivan:

Partially Cooperative Multi-Agent Periodic Indivisible Resource Allocation. 1498-1500 - Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi:

Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity. 1501-1503 - Athina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra:

Towards a Competence-Based Approach to Allocate Teams to Tasks. 1504-1506 - Mirco Giacobbe, Mohammadhosein Hasanbeig, Daniel Kroening, Hjalmar Wijk:

Shielding Atari Games with Bounded Prescience. 1507-1509 - Joseph P. Giordano, Annie S. Wu, Arjun Pherwani, H. David Mathias:

Comparison of Desynchronization Methods for a Decentralized Swarm on a Logistical Resupply Problem. 1510-1511 - Mahak Goindani, Jennifer Neville:

Towards Decentralized Social Reinforcement Learning via Ego-Network Extrapolation. 1512-1514 - Rica Gonen, Erel Segal-Halevi:

A Global Multi-Sided Market with Ascending-Price Mechanism. 1515-1517 - Arnaud Grivet Sébert, Nicolas Maudet, Patrice Perny, Paolo Viappiani:

Rank Aggregation by Dissatisfaction Minimisation in the Unavailable Candidate Model. 1518-1520 - Nathanaël Gross-Humbert, Nawal Benabbou, Aurélie Beynier, Nicolas Maudet:

Sequential and Swap Mechanisms for Public Housing Allocation with Quotas and Neighbourhood-Based Utilities. 1521-1523 - Carla Guerra, Francisco S. Melo, Manuel Lopes:

Teaching Unknown Learners to Classify via Feature Importance. 1524-1526 - Wataru Hatanaka, Fumihiro Sasaki, Ryota Yamashina, Atsuo Kawaguchi:

Simultaneous Learning of Moving and Active Perceptual Policies for Autonomous Robot. 1527-1529 - Conor F. Hayes, Mathieu Reymond, Diederik M. Roijers, Enda Howley, Patrick Mannion:

Distributional Monte Carlo Tree Search for Risk-Aware and Multi-Objective Reinforcement Learning. 1530-1532 - Vincent Hsiao, Xinyue Pan, Dana S. Nau, Rina Dechter:

Approximating Spatial Evolutionary Games using Bayesian Networks. 1533-1535 - Dmitry Ivanov, Vladimir Egorov, Aleksei Shpilman:

Balancing Rational and Other-Regarding Preferences in Cooperative-Competitive Environments. 1536-1538 - Anurag Jain, Shoeb Siddiqui, Sujit Gujar:

We might walk together, but I run faster: Network Fairness and Scalability in Blockchains. 1539-1541 - Pallavi Jain, Krzysztof Sornat, Nimrod Talmon:

Preserving Consistency for Liquid Knapsack Voting. 1542-1544 - Wojciech Jamroga, Wojciech Penczek

, Teofil Sidoruk
:
Strategic Abilities of Asynchronous Agents: Semantic Side Effects. 1545-1547 - Yuan Jiang, Zhiguang Cao, Jie Zhang:

Solving 3D Bin Packing Problem via Multimodal Deep Reinforcement Learning. 1548-1550 - Timotheus Kampik, Juan Carlos Nieves:

Toward Consistent Agreement Approximation in Abstract Argumentation and Beyond. 1551-1553 - Shota Kawajiri, Kazuki Hirashima, Masashi Shiraishi:

Coverage Control under Connectivity Constraints. 1554-1556 - Mehmet Onur Keskin, Umut Çakan, Reyhan Aydogan:

Solver Agent: Towards Emotional and Opponent-Aware Agent for Human-Robot Negotiation. 1557-1559 - Paul A. Knott, Micah Carroll, Sam Devlin, Kamil Ciosek, Katja Hofmann, Anca D. Dragan, Rohin Shah:

Evaluating the Robustness of Collaborative Agents. 1560-1562 - Sonja Kraiczy, Ágnes Cseh, David F. Manlove:

On Weakly and Strongly Popular Rankings. 1563-1565 - Martin Lackner, Jan Maly, Simon Rey:

Fairness in Long-Term Participatory Budgeting. 1566-1568 - Philip Lazos, Francisco J. Marmolejo Cossío, Xinyu Zhou, Jonathan Katz:

RPPLNS: Pay-per-last-N-shares with a Randomised Twist. 1569-1571 - Omer Lev, Wei Lu, Alan Tsang, Yair Zick:

Learning Cooperative Solution Concepts from Voting Behavior: A Case Study on the Israeli Knesset. 1572-1574 - Rotem Lev Lehman, Guy Shani, Roni Stern:

Partial Disclosure of Private Dependencies in Privacy Preserving Planning. 1575-1577 - Fu Li, C. Gregory Plaxton, Vaibhav B. Sinha:

Object Allocation Over a Network of Objects: Mobile Agents with Strict Preferences. 1578-1580 - Jiaoyang Li, Zhe Chen, Daniel Harabor, Peter J. Stuckey, Sven Koenig:

Anytime Multi-Agent Path Finding via Large Neighborhood Search. 1581-1583 - Mickey Li, Arthur Richards, Mahesh Sooriyabandara:

Reliability-Aware Multi-UAV Coverage Path Planning using a Genetic Algorithm. 1584-1586 - Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan:

Call Markets with Adaptive Clearing Intervals. 1587-1589 - Xiaolong Liu, Weiwei Chen:

Solid Semantics and Extension Aggregation Using Quota Rules under Integrity Constraints. 1590-1592 - Andrei Lupu, Hengyuan Hu, Jakob N. Foerster:

Trajectory Diversity for Zero-Shot Coordination. 1593-1595 - Francisco Martín Rico, Matteo Morelli, Huáscar Espinoza, Francisco J. Rodríguez-Lera, Vicente Matellán Olivera:

Optimized Execution of PDDL Plans using Behavior Trees. 1596-1598 - Katherine Mayo, Michael P. Wellman:

A Strategic Analysis of Portfolio Compression. 1599-1601 - Munyque Mittelmann, Sylvain Bouveret, Laurent Perrussel:

A General Framework for the Logical Representation of Combinatorial Exchange Protocols. 1602-1604 - Anudit Nagar, Cuong Tran, Ferdinando Fioretto:

Privacy-Preserving and Accountable Multi-agent Learning. 1605-1606 - Somjit Nath, Richa Verma, Abhik Ray, Harshad Khadilkar:

SIBRE: Self Improvement Based REwards for Adaptive Feedback in Reinforcement Learning. 1607-1609 - David O'Callaghan, Patrick Mannion:

Tunable Behaviours in Sequential Social Dilemmas using Multi-Objective Reinforcement Learning. 1610-1612 - Takato Okudo, Seiji Yamada:

Online Learning of Shaping Reward with Subgoal Knowledge. 1613-1615 - P. Parnika, Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda, Shalabh Bhatnagar:

Attention Actor-Critic Algorithm for Multi-Agent Constrained Co-operative Reinforcement Learning. 1616-1618 - Michael Pernpeintner:

Toward a Self-Learning Governance Loop for Competitive Multi-Attribute MAS. 1619-1621 - Hedieh Ranjbartabar, Deborah Richards, Ayse Aysin Bilgin, Cat Kutay:

Personalising the Dialogue of Relational Agents for First-Time Users. 1622-1624 - Sachit Rao, Shrisha Rao:

Finite-time Consensus in the Presence of Malicious Agents. 1625-1627 - Thomas Robinson, Guoxin Su, Minjie Zhang:

Multiagent Task Allocation and Planning with Multi-Objective Requirements. 1628-1630 - Alejandro Romero, Francisco Bellas, Richard J. Duro:

An Autonomous Drive Balancing Strategy for the Design of Purpose in Open-ended Learning Robots. 1631-1633 - Leonardo Rosa Amado, Ramon Fraga Pereira, Felipe Meneguzzi:

Combining LSTMs and Symbolic Approaches for Robust Plan Recognition. 1634-1636 - Enna Sachdeva, Shauharda Khadka, Somdeb Majumdar, Kagan Tumer:

Dynamic Skill Selection for Learning Joint Actions. 1637-1639 - Sandhya Saisubramanian, Shlomo Zilberstein:

Mitigating Negative Side Effects via Environment Shaping. 1640-1642 - Fernando P. Santos, Francisco C. Santos, Jorge M. Pacheco, Simon A. Levin:

Social Network Interventions to Prevent Reciprocity-driven Polarization. 1643-1645 - Aron Sarmasi, Timothy Zhang, Chu-Hung Cheng, Huyen Pham, Xuanchen Zhou, Duong Nguyen, Soumil Shekdar, Joshua McCoy:

HOAD: The Hanabi Open Agent Dataset. 1646-1648 - Gal Shahaf, Ehud Shapiro, Nimrod Talmon:

Egalitarian and Just Digital Currency Networks. 1649-1651 - Shusuke Shigenaka, Shunki Takami, Shuhei Watanabe, Yuki Tanigaki, Yoshihiko Ozaki, Masaki Onishi:

MAS-Bench: Parameter Optimization Benchmark for Multi-agent Crowd Simulation. 1652-1654 - Arambam James Singh, Akshat Kumar, Hoong Chuin Lau:

Approximate Difference Rewards for Scalable Multiagent Reinforcement Learning. 1655-1657 - Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy:

Self-Attention Meta-Learner for Continual Learning. 1658-1660 - Errikos Streviniotis, Athina Georgara, Georgios Chalkiadakis:

A Succinct Representation Scheme for Cooperative Games under Uncertainty. 1661-1663 - Filipo Studzinski Perotto, Sattar Vakili, Pratik Gajane, Yaser Faghan, Mathieu Bourgais:

Gambler Bandits and the Regret of Being Ruined. 1664-1667 - Chuxiong Sun, Bo Wu, Rui Wang, Xiaohui Hu, Xiaoya Yang, Cong Cong:

Intrinsic Motivated Multi-Agent Communication. 1668-1670 - Wei-Fang Sun, Cheng-Kuang Lee, Chun-Yi Lee:

A Distributional Perspective on Value Function Factorization Methods for Multi-Agent Reinforcement Learning. 1671-1673 - Michal Sustr, Martin Schmid, Matej Moravcík, Neil Burch, Marc Lanctot, Michael Bowling:

Sound Algorithms in Imperfect Information Games. 1674-1676 - Atena M. Tabakhi, Yuanming Xiao, William Yeoh, Roie Zivan:

Branch-and-Bound Heuristics for Incomplete DCOPs. 1677-1679 - Alok Talekar, Sharad Shriram, Nidhin K. Vaidhiyan, Gaurav Aggarwal, Jiangzhuo Chen, Srinivasan Venkatramanan, Lijing Wang, Aniruddha Adiga, Adam Sadilek, Ashish Tendulkar, Madhav V. Marathe, Rajesh Sundaresan, Milind Tambe:

Cohorting to Isolate Asymptomatic Spreaders: An Agent-Based Simulation Study on the Mumbai Suburban Railway. 1680-1682 - Andreia Sofia Teixeira, Francisco C. Santos, Alexandre P. Francisco, Fernando P. Santos:

Eliciting Fairness in Multiplayer Bargaining through Network-Based Role Assignment. 1683-1685 - Paul Tylkin, Goran Radanovic, David C. Parkes:

Learning Robust Helpful Behaviors in Two-Player Cooperative Atari Environments. 1686-1688 - Shresth Verma:

Towards Sample Efficient Learners in Population based Referential Games through Action Advising. 1689-1691 - Hang Wang, Sen Lin, Hamid Jafarkhani, Junshan Zhang:

Distributed Q-Learning with State Tracking for Multi-agent Networked Control. 1692-1694 - Qian Wang, Yurong Chen:

The Tight Bound for Pure Price of Anarchy in an Extended Miner's Dilemma Game. 1695-1697 - Stephen G. Ware, Cory Siler:

The Sabre Narrative Planner: Multi-Agent Coordination with Intentions and Beliefs. 1698-1700 - Shiqing Wu, Quan Bai, Weihua Li:

Learning Policies for Effective Incentive Allocation in Unknown Social Networks. 1701-1703 - Xiang Yan, Yiling Chen:

Optimal Crowdfunding Design. 1704-1706 - Leonid Zeynalvand, Tie Luo, Ewa Andrejczuk, Dusit Niyato, Sin G. Teo, Jie Zhang:

A Blockchain-Enabled Quantitative Approach to Trust and Reputation Management with Sparse Evidence. 1707-1708 - Mingyue Zhang, Zhi Jin, Yang Xu, Zehan Shen, Kun Liu, Keyu Pan:

Fast Adaptation to External Agents via Meta Imitation Counterfactual Regret Advantage. 1709-1711 - Luisa M. Zintgraf, Sam Devlin, Kamil Ciosek, Shimon Whiteson, Katja Hofmann:

Deep Interactive Bayesian Reinforcement Learning via Meta-Learning. 1712-1714
JAAMAS Track
- Babatunde Opeoluwa Akinkunmi, Moyin Florence Babalola:

A Norm Enforcement Mechanism for a Time-Constrained Conditional Normative Framework. 1715-1717 - Rafael H. Bordini, Amal El Fallah Seghrouchni, Koen V. Hindriks, Brian Logan, Alessandro Ricci:

Agent Programming in the Cognitive Era. 1718-1720 - Roberta Calegari, Giovanni Ciatto, Viviana Mascardi, Andrea Omicini:

Logic-based Technologies for Multi-agent Systems: Summary of a Systematic Literature Review. 1721-1723 - Angelo Croatti, Alessandro Ricci:

Programming Agent-based Mobile Apps: The JaCa-Android Framework. 1724-1726 - Riccardo De Masellis, Valentin Goranko:

Logic-based Specification and Verification of Homogeneous Dynamic Multi-agent Systems. 1727-1729 - Edmund H. Durfee, Abhishek Thakur, Eli Goldweber:

On Teammate-Pattern-Aware Autonomy. 1730-1732 - Michael Fisher, Viviana Mascardi, Kristin Y. Rozier, Bernd-Holger Schlingloff, Michael Winikoff, Neil Yorke-Smith:

Summarising a Framework for the Certification of Reliable Autonomous Systems. 1733-1734 - Guangliang Li, Hamdi Dibeklioglu, Shimon Whiteson, Hayley Hung:

Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework. 1735-1737 - Anis Najar, Olivier Sigaud, Mohamed Chetouani:

Teaching a Robot with Unlabeled Instructions: The TICS Architecture. 1738-1739 - Harish Ravichandar, Kenneth Shaw, Sonia Chernova:

STRATA: Unified Framework for Task Assignments in Large Teams of Heterogeneous Agents. 1740-1742 - Arles Rodríguez, Jonatan Gómez, Ada Diaconescu:

A Decentralised Self-Healing Approach for Network Topology Maintenance. 1743-1745 - Yang Xiang, Abdulrahman Alshememry:

Constructing Junction Tree Agent Organization with Privacy. 1746-1748
Demonstration Track
- Jaime Arias, Wojciech Penczek, Laure Petrucci, Teofil Sidoruk:

ADT2AMAS: Managing Agents in Attack-Defence Scenarios. 1749-1751 - Matteo Baldoni, Cristina Baroglio, Olivier Boissier, Roberto Micalizio, Stefano Tedeschi:

Distributing Responsibilities for Exception Handling in JaCaMo. 1752-1754 - Chaithanya Basrur, Arambam James Singh, Arunesh Sinha, Akshat Kumar:

Ship-GAN: Generative Modeling Based Maritime Traffic Simulator. 1755-1757 - Zehong Cao, Jie Yun:

An Online Human-Agent Interaction System: A Brain-controlled Agent Playing Games in Unity. 1758-1760 - Adam Dejl, Chloe He, Pranav Mangal, Hasan Mohsin, Bogdan Surdu, Eduard Voinea, Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni:

Argflow: A Toolkit for Deep Argumentative Explanations for Neural Networks. 1761-1763 - Angelo Ferrando, Vadim Malvone:

Strategy RV: A Tool to Approximate ATL Model Checking under Imperfect Information and Perfect Recall. 1764-1766 - Timotheus Kampik, Andres Gomez, Andrei Ciortea, Simon Mayer:

Autonomous Agents on the Edge of Things. 1767-1769 - Damian Kurpiewski, Witold Pazderski, Wojciech Jamroga, Yan Kim:

STV+Reductions: Towards Practical Verification of Strategic Ability Using Model Reductions. 1770-1772 - Enrico Liscio, Michiel van der Meer, Catholijn M. Jonker, Pradeep K. Murukannaiah:

A Collaborative Platform for Identifying Context-Specific Values. 1773-1775 - Gilberto Marcon dos Santos, Julie A. Adams:

Scalable Multiple Robot Task Planning with Plan Merging and Conflict Resolution. 1776-1778 - Rajmund Nagy, Taras Kucherenko, Birger Moëll, André Pereira, Hedvig Kjellström, Ulysses Bernardet:

A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents. 1779-1781 - Alexandros Nikou, Anusha Mujumdar, Marin Orlic, Aneta Vulgarakis Feljan:

Symbolic Reinforcement Learning for Safe RAN Control. 1782-1784 - Jacobus G. M. van der Linden, Jesse Mulderij, Bob Huisman, Joris W. den Ouden, Marjan van den Akker, Han Hoogeveen, Mathijs Michiel de Weerdt:

TORS: A Train Unit Shunting and Servicing Simulator. 1785-1787 - Yifeng Zeng, Zhangrui Yao, Yinghui Pan, Wanqing Chen, Junxin Zhou, Junhan Chen, Biyang Ma, Zhong Ming:

ATPT: Automate Typhoon Contingency Plan Generation from Text. 1788-1790 - Lan Zhang, Weihua Li, Quan Bai, Edmund M.-K. Lai:

Graph-based Self-Adaptive Conversational Agent. 1791-1793
Doctoral Consortium
- Ben Armstrong:

Exploring the Relationship Between Social Choice and Machine Learning. 1794-1796 - Jennifer Boyd:

Understanding the Role of Inequality in Creating and Sustaining the Alcohol Harm Paradox using Agent-Based Modelling. 1797-1798 - Martin Bullinger:

Computing Desirable Outcomes in Specific Multi-Agent Scenarios. 1799-1801 - Rachael Colley:

Multi-Agent Ranked Delegations in Voting. 1802-1804 - José Aleixo Cruz:

Learning Realistic and Safe Pedestrian Behavior by Imitation. 1805-1807 - Hossein Haeri:

Reward-Sharing Relational Networks in Multi-Agent Reinforcement Learning as a Framework for Emergent Behavior. 1808-1810 - Naieme Hazrati:

Impact of Recommender Systems on the Dynamics of Users' Choices. 1811-1813 - Zahoor Ul Islam:

Software Engineering Methods for Responsible Artificial Intelligence. 1814-1815 - Jihyun Jeong:

Leveraging Social Interactions in Human-Agent Decision-Making. 1816-1817 - Alexander Lam:

Balancing Fairness, Efficiency and Strategy-Proofness in Voting and Facility Location Problems. 1818-1819 - Matthew V. Law:

Intention-Aware Human-Robot Collaborative Design. 1820-1822 - Patrick Lederer:

Non-manipulability in Set-valued and Probabilistic Social Choice Theory. 1823-1825 - Siddharth Mehrotra:

Modelling Trust in Human-AI Interaction. 1826-1828 - Manon Prédhumeau:

Simulating Realistic Pedestrian Behaviors in the Context of Autonomous Vehicles in Shared Spaces. 1829-1831 - Fatemeh Rastgar:

Exploiting Hidden Convexities for Real-time and Reliable Optimization Algorithms for Challenging Motion Planning and Control Applications. 1832-1834 - Peter Stringer:

Adaptable and Verifiable BDI Reasoning. 1835-1836 - Shi Yuan Tang:

Improving Sample-based Reinforcement Learning through Complex Non-parametric Distributions. 1837-1839 - Carlo Taticchi:

A Concurrent Language for Negotiation and Debate with Argumentation. 1840-1841 - Vignesh Viswanathan:

Computing using Samples: Theoretical Guarantees with the Direct Learning Approach. 1842-1844 - Youssef Mahmoud Youssef:

Inducing Rules about Distributed Robotic Systems for Fault Detection & Diagnosis. 1845-1847 - Sixie Yu:

Design and Analysis of Networks under Strategic Behavior. 1848-1849 - Mengqi Zhang:

Mechanism Design in Facility Location Games. 1850-1852
