19th AAMAS 2020: Auckland, New Zealand
- Amal El Fallah Seghrouchni, Gita Sukthankar, Bo An, Neil Yorke-Smith:
Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '20, Auckland, New Zealand, May 9-13, 2020. International Foundation for Autonomous Agents and Multiagent Systems 2020, ISBN 978-1-4503-7518-4
Keynote Talks
- Carla P. Gomes:
AI for Advancing Scientific Discovery for a Sustainable Future. 1
- Thore Graepel:
Automatic Curricula in Deep Multi-Agent Reinforcement Learning. 2
- Alison J. Heppenstall, Nick Malleson:
Building Cities from Slime Mould, Agents and Quantum Field Theory. 3-4
- Sergey Levine:
Unsupervised Reinforcement Learning. 5-6
Research Papers
- Yehia Abd Alrahman, Giuseppe Perelli, Nir Piterman:
Reconfigurable Interaction for MAS Modelling. 7-15
- Nirav Ajmeri, Hui Guo, Pradeep K. Murukannaiah, Munindar P. Singh:
Elessar: Ethics in Norm-Aware Agents. 16-24
- Michael E. Akintunde, Elena Botoeva, Panagiotis Kouvaros, Alessio Lomuscio:
Formal Verification of Neural Agents in Non-deterministic Environments. 25-33
- Shaull Almagor, Morteza Lahijanian:
Explainable Multi Agent Path Finding. 34-42
- Yackolley Amoussou-Guenou, Bruno Biais, Maria Potop-Butucaru, Sara Tucci Piergiovanni:
Rational vs Byzantine Players in Consensus-based Blockchains. 43-51
- Merlinda Andoni, Valentin Robu, Wolf-Gerrit Früh, David Flynn:
Strategic Decision-Making for Power Network Investments with Distributed Renewable Generation. 52-60
- Alessia Antelmi, Gennaro Cordasco, Carmine Spagnuolo, Vittorio Scarano:
A Design-Methodology for Epidemic Dynamics via Time-Varying Hypergraphs. 61-69
- Antonios Antoniadis, Andrés Cristi, Tim Oosterwijk, Alkmini Sgouritsa:
A General Framework for Energy-Efficient Cloud Computing Mechanisms. 70-78
- Enrique Areyan Viqueira, Cyrus Cousins, Amy Greenwald:
Improved Algorithms for Learning Equilibria in Simulation-Based Games. 79-87
- James Ault, Josiah P. Hanna, Guni Sharon:
Learning an Interpretable Traffic Signal Control Policy. 88-96
- Haris Aziz, Anton Baychkov, Péter Biró:
Summer Internship Matching with Funding Constraints. 97-104
- Davide Azzalini, Alberto Castellini, Matteo Luperto, Alessandro Farinelli, Francesco Amigoni:
HMMs for Anomaly Detection in Autonomous Robots. 105-113
- Nathanaël Barrot, Sylvaine Lemeilleur, Nicolas Paget, Abdallah Saffidine:
Peer Reviewing in Participatory Guarantee Systems: Modelisation and Algorithmic Aspects. 114-122
- Connor Basich, Justin Svegliato, Kyle Hollins Wray, Stefan J. Witwicki, Joydeep Biswas, Shlomo Zilberstein:
Learning to Optimize Autonomy in Competence-Aware Systems. 123-131
- Dorothea Baumeister, Ann-Kathrin Selker, Anaëlle Wilczynski:
Manipulation of Opinion Polls to Influence Iterative Elections. 132-140
- Ryan Beal, Georgios Chalkiadakis, Timothy J. Norman, Sarvapali D. Ramchurn:
Optimising Game Tactics for Football. 141-149
- Xiaohui Bei, Shengxin Liu, Chung Keung Poon, Hongao Wang:
Candidate Selections with Proportional Fairness Constraints. 150-158
- Matteo Bellusci, Nicola Basilico, Francesco Amigoni:
Multi-Agent Path Finding in Configurable Environments. 159-167
- Arthur Boixel, Ulle Endriss:
Automated Justification of Collective Decisions via Constraint Solving. 168-176
- Iago Bonnici, Abdelkader Gouaïch, Fabien Michel:
Input Addition and Deletion in Reinforcement: Towards Learning with Structural Changes. 177-185
- Sirin Botan, Ulle Endriss:
Majority-Strategyproofness in Judgment Aggregation. 186-194
- Felix Brandt, Martin Bullinger:
Finding and Recognizing Popular Coalition Structures. 195-203
- Jan Bürmann, Enrico H. Gerding, Baharak Rastegari:
Fair Allocation of Resources with Uncertain Availability. 204-212
- Martin Bullinger:
Pareto-Optimality in Cardinal Hedonic Games. 213-221
- Yaniel Carreno, Èric Pairet, Yvan R. Pétillot, Ronald P. A. Petrick:
Task Allocation Strategy for Heterogeneous Robot Teams in Offshore Missions. 222-230
- Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong, Yair Zick:
Weighted Envy-Freeness in Indivisible Item Allocation. 231-239
- Hau Chan, Mohammad T. Irfan, Cuong Viet Than:
Schelling Models with Localized Social Influence: A Game-Theoretic Framework. 240-248
- Ziyu Chen, Wenxin Zhang, Yanchen Deng, Dingding Chen, Qiang Li:
RMB-DPOP: Refining MB-DPOP by Reducing Redundant Inference. 249-257
- Samuel H. Christie V., Amit K. Chopra, Munindar P. Singh:
Refinement for Multiagent Protocols. 258-266
- Murat Cubuktepe, Zhe Xu, Ufuk Topcu:
Policy Synthesis for Factored MDPs with Graph Temporal Logic Specifications. 267-275
- Gianlorenzo D'Angelo, Mattia D'Emidio, Shantanu Das, Alfredo Navarra, Giuseppe Prencipe:
Leader Election and Compaction for Asynchronous Silent Programmable Matter. 276-284
- Michael Dann, John Thangarajah, Yuan Yao, Brian Logan:
Intention-Aware Multiagent Scheduling. 285-293
- Giuseppe De Giacomo, Yves Lespérance:
Goal Formation through Interaction in the Situation Calculus: A Formal Account Grounded in Behavioral Science. 294-302
- Frits de Nijs, Peter J. Stuckey:
Risk-Aware Conditional Replanning for Globally Constrained Multi-Agent Sequential Decision Making. 303-311
- Greg d'Eon, Kate Larson:
Testing Axioms Against Human Reward Divisions in Cooperative Games. 312-320
- Palash Dey, Sourav Medya:
Manipulating Node Similarity Measures in Networks. 321-329
- Gaurav Dixit, Stéphane Airiau, Kagan Tumer:
Gaussian Processes as Multiagent Reward Models. 330-338
- Ryan D'Orazio, Dustin Morrill, James R. Wright, Michael Bowling:
Alternative Function Approximation Parameterizations for Solving Games: An Analysis of ƒ-Regression Counterfactual Regret Minimization. 339-347
- Yihan Du, Siwei Wang, Longbo Huang:
Dueling Bandits: From Two-dueling to Multi-dueling. 348-356
- Abhimanyu Dubey, Alex Pentland:
Private and Byzantine-Proof Cooperative Decision-Making. 357-365
- Edith Elkind, Piotr Faliszewski, Sushmita Gupta, Sanjukta Roy:
Algorithms for Swap and Shift Bribery in Structured Elections. 366-374
- Mirgita Frasheri, José Manuel Cano-García, Eva González-Parada, Baran Çürüklü, Mikael Ekström, Alessandro Vittorio Papadopoulos, Cristina Urdiales:
Adaptive Autonomy in Wireless Sensor Networks. 375-383
- Rupert Freeman, Sujoy Sikdar, Rohit Vaish, Lirong Xia:
Equitable Allocations of Indivisible Chores. 384-392
- Kobi Gal, Ta Duy Nguyen, Quang Nhat Tran, Yair Zick:
Threshold Task Games: Theory, Platform and Experiments. 393-401
- Jiarui Gan, Edith Elkind, Sarit Kraus, Michael J. Wooldridge:
Mechanism Design for Defense Coordination in Security Games. 402-410
- Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, Nidhi Hegde:
Multi Type Mean Field Reinforcement Learning. 411-419
- Jugal Garg, Peter McGlaughlin:
Computing Competitive Equilibria with Mixed Manna. 420-428
- Felix Gervits, Dean Thurston, Ravenna Thielstrom, Terry Fong, Quinn Pham, Matthias Scheutz:
Toward Genuine Robot Teammates: Improving Human-Robot Team Performance Using Robot Shared Mental Models. 429-437
- Sina Ghiassian, Banafsheh Rafiee, Yat Long Lo, Adam White:
Improving Performance in Reinforcement Learning by Breaking Generalization in Neural Networks. 438-446
- Ahana Ghosh, Sebastian Tschiatschek, Hamed Mahdavi, Adish Singla:
Towards Deployment of Robust Cooperative AI Agents: An Algorithmic Framework for Learning Adaptive Policies. 447-455
- Hugo Gimbert, Soumyajit Paul, B. Srivathsan:
A Bridge between Polynomial Optimization and Games with Imperfect Recall. 456-464
- Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich:
Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments. 465-473
- John Harwell, London Lowmanstone, Maria L. Gini:
Demystifying Emergent Intelligence and Its Effect on Performance In Large Robot Swarms. 474-482
- Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening:
Cautious Reinforcement Learning with Logical Constraints. 483-491
- Daniel Hennes, Dustin Morrill, Shayegan Omidshafiei, Rémi Munos, Julien Pérolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, Paavo Parmas, Edgar A. Duéñez-Guzmán, Karl Tuyls:
Neural Replicator Dynamics: Multiagent Learning via Hedging Policy Gradients. 492-501
- Khoi D. Hoang, William Yeoh, Makoto Yokoo, Zinovi Rabinovich:
New Algorithms for Continuous Distributed Constraint Optimization Problems. 502-510
- Safwan Hossain, Nisarg Shah:
The Effect of Strategic Noise in Linear Regression. 511-519
- David Earl Hostallero, Daewoo Kim, Sangwoo Moon, Kyunghwan Son, Wan Ju Kang, Yung Yi:
Inducing Cooperation through Reward Reshaping based on Peer Evaluations in Deep Multi-Agent Reinforcement Learning. 520-528
- Taoan Huang, Weiran Shen, David Zeng, Tianyu Gu, Rohit Singh, Fei Fang:
Green Security Game with Community Engagement. 529-537
- Edward Hughes, Thomas W. Anthony, Tom Eccles, Joel Z. Leibo, David Balduzzi, Yoram Bachrach:
Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games. 538-547
- Léonard Hussenot, Matthieu Geist, Olivier Pietquin:
CopyCAT: Taking Control of Neural Policies with Constant Attacks. 548-556
- Matthew Inkawhich, Yiran Chen, Hai Helen Li:
Snooping Attacks on Deep Reinforcement Learning. 557-565
- Gabriel Istrate, Cosmin Bonchis, Claudiu Gatina:
It's Not Whom You Know, It's What You, or Your Friends, Can Do: Coalitional Frameworks for Network Centralities. 566-574
- Harshavardhan Kamarthi, Priyesh Vijayan, Bryan Wilder, Balaraman Ravindran, Milind Tambe:
Influence Maximization in Unknown Social Networks: Learning Policies for Effective Graph Sampling. 575-583
- Naoyuki Kamiyama:
On Stable Matchings with Pairwise Preferences and Matroid Constraints. 584-592
- Ian A. Kash, Michael Sullins, Katja Hofmann:
Combining No-regret and Q-learning. 593-601
- Yasushi Kawase:
Approximately Stable Matchings with General Constraints. 602-610
- David Kempe, Sixie Yu, Yevgeniy Vorobeychik:
Inducing Equilibria in Networked Public Goods Games through Network Structure Modification. 611-619
- Dong-Ki Kim, Miao Liu, Shayegan Omidshafiei, Sebastian Lopez-Cot, Matthew Riemer, Golnaz Habibi, Gerald Tesauro, Sami Mourad, Murray Campbell, Jonathan P. How:
Learning Hierarchical Teaching Policies for Cooperative Agents. 620-628
- David Klaska, Antonín Kucera, Vojtech Rehák:
Adversarial Patrolling with Drones. 629-637
- Grammateia Kotsialou, Luke Riley:
Incentivising Participation in Liquid Democracy with Breadth-First Delegation. 638-644
- Justin Kruger, Zoi Terzopoulou:
Strategic Manipulation with Incomplete Preferences: Possibilities and Impossibilities for Positional Scoring Rules. 645-653
- Chris J. Kuhlman, Achla Marathe, Anil Vullikanti, Nafisa Halim, Pallab Mozumder:
Increasing Evacuation during Disaster Events. 654-662
- Soh Kumabe, Takanori Maehara:
Convexity of Hypergraph Matching Game. 663-671
- Hian Lee Kwa, Jabez Leong Kit, Roland Bouffanais:
Optimal Swarm Strategy for Dynamic Target Search and Tracking. 672-680
- Salvatore La Torre, Gennaro Parlato:
On the Model-Checking of Branching-time Temporal Logic with BDI Modalities. 681-689
- Yaqing Lai, Wufan Wang, Yunjie Yang, Jihong Zhu, Minchi Kuang:
Hindsight Planner. 690-698
- Christopher Leturc, Grégory Bonnet:
A Deliberate BIAT Logic for Modeling Manipulations. 699-707
- Bo Li, Yingkai Li:
Fair Resource Sharing and Dorm Assignment. 708-716
- Henger Li, Wen Shen, Zizhan Zheng:
Spatial-Temporal Moving Target Defense: A Markov Stackelberg Game Model. 717-725
- Jiaoyang Li, Kexuan Sun, Hang Ma, Ariel Felner, T. K. Satish Kumar, Sven Koenig:
Moving Agents in Formation in Congested Environments. 726-734
- Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur:
On Emergent Communication in Competitive Multi-Agent Teams. 735-743
- Baihan Lin, Guillermo A. Cecchi, Djallel Bouneffouf, Jenna M. Reinen, Irina Rish:
A Story of Two Streams: Reinforcement Learning Models from Human Behavior and Neuropsychiatry. 744-752
- Anji Liu, Yitao Liang, Guy Van den Broeck:
Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration. 753-761
- Alessio Lomuscio, Edoardo Pirovano:
Parameterised Verification of Strategic Properties in Probabilistic Multi-Agent Systems. 762-770
- Meghna Lowalekar, Pradeep Varakantham, Patrick Jaillet:
Competitive Ratios for Online Multi-capacity Ridesharing. 771-779
- Yuan Luo, Nicholas R. Jennings:
A Budget-Limited Mechanism for Category-Aware Crowdsourcing Systems. 780-788
- Andrei Lupu, Doina Precup:
Gifting in Multi-Agent Reinforcement Learning. 789-797
- Xueguang Lyu, Christopher Amato:
Likelihood Quantile Networks for Coordinating Multi-Agent Reinforcement Learning. 798-806
- Hongyao Ma, Reshef Meir, David C. Parkes, Elena Wu-Yan:
Penalty Bidding Mechanisms for Allocating Resources and Overcoming Present-Bias. 807-815
- Jinming Ma, Feng Wu:
Feudal Multi-Agent Deep Reinforcement Learning for Traffic Signal Control. 816-824
- Saaduddin Mahmud, Moumita Choudhury, Md. Mosaddek Khan, Long Tran-Thanh, Nicholas R. Jennings:
AED: An Anytime Evolutionary DCOP Algorithm. 825-833
- Alberto Marchesi, Francesco Trovò, Nicola Gatti:
Learning Probably Approximately Correct Maximin Strategies in Simulation-Based Games with Infinite Strategy Spaces. 834-842
- Gilberto Marcon dos Santos, Julie A. Adams:
Optimal Temporal Plan Merging. 851-859
- Eric Mazumdar, Lillian J. Ratliff, Michael I. Jordan, S. Shankar Sastry:
Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games. 860-868
- Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo:
Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning. 869-877
- Congcong Miao, Jilong Wang, Heng Yu, Weichen Zhang, Yinyao Qi:
Trajectory-User Linking with Attentive Recurrent Network. 878-886
- Aniket Murhekar, Ruta Mehta:
Approximate Nash Equilibria of Imitation Games: Algorithms and Complexity. 887-894
- Goran Muric, Alexey Tregubov, Jim Blythe, Andrés Abeliuk, Divya Choudhary, Kristina Lerman, Emilio Ferrara:
Massive Cross-Platform Simulations of Online Social Networks. 895-903
- Pavel Naumov, Jia Tao:
Duty to Warn in Strategic Games. 904-912
- Grigory Neustroev, Mathijs Michiel de Weerdt:
Generalized Optimistic Q-Learning with Provable Efficiency. 913-921
- Marc Neveling, Jörg Rothe:
The Complexity of Cloning Candidates in Multiwinner Elections. 922-930
- Xiaodong Nian, Athirai Aravazhi Irissappane, Diederik M. Roijers:
DCRAC: Deep Conditioned Recurrent Actor-Critic for Multi-Objective Partially Observable Environments. 931-938
- Chris Nota, Philip S. Thomas:
Is the Policy Gradient a Gradient? 939-947
- Alessandro Nuara, Francesco Trovò, Dominic Crippa, Nicola Gatti, Marcello Restelli:
Driving Exploration by Maximum Distribution in Gaussian Process Bandits. 948-956
- Svetlana Obraztsova, Maria Polukarov, Edith Elkind, Marek Grzesiuk:
Multiwinner Candidacy Games. 957-965
- Stefan Olafsson, Byron C. Wallace, Timothy W. Bickmore:
Towards a Computational Framework for Automating Substance Use Counseling with Virtual Agents. 966-974
- Declan Oller, Tobias Glasmachers, Giuseppe Cuccu:
Analyzing Reinforcement Learning Benchmarks with Random Weight Guessing. 975-982
- Yaniv Oshrat, Noa Agmon, Sarit Kraus:
Non-Uniform Policies for Multi-Robot Asymmetric Perimeter Patrol in Adversarial Domains. 983-991
- Han-Ching Ou, Arunesh Sinha, Sze-Chuan Suen, Andrew Perrault, Alpan Raval, Milind Tambe:
Who and When to Screen: Multi-Round Active Screening for Network Recurrent Infectious Diseases Under Uncertainty. 992-1000
- Ling Pan, Qingpeng Cai, Longbo Huang:
Multi-Path Policy Optimization. 1001-1009
- Dhaval Parmar, Stefán Ólafsson, Dina Utami, Prasanth Murali, Timothy W. Bickmore:
Navigating the Combinatorics of Virtual Agent Design Space to Maximize Persuasion. 1010-1018
- Lukasz Pelcner, Shaling Li, Matheus Aparecido do Carmo Alves, Leandro Soriano Marcolino, Alex Collins:
Real-time Learning and Planning in Environments with Swarms: A Hierarchical and a Parameter-based Simulation Approach. 1019-1027
- Florian Pescher, Nils Napp, Benoît Piranda, Julien Bourgeois:
GAPCoD: A Generic Assembly Planner by Constrained Disassembly. 1028-1036
- Lasse Peters, David Fridovich-Keil, Claire J. Tomlin, Zachary N. Sunberg:
Inference-Based Strategy Alignment for General-Sum Differential Games. 1037-1045
- Geoffrey Pettet, Ayan Mukhopadhyay, Mykel J. Kochenderfer, Yevgeniy Vorobeychik, Abhishek Dubey:
On Algorithmic Decision Procedures in Emergency Response Systems in Smart and Connected Communities. 1046-1054