


18th AAMAS 2019: Montreal, QC, Canada
- Edith Elkind, Manuela Veloso, Noa Agmon, Matthew E. Taylor: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, Montreal, QC, Canada, May 13-17, 2019. International Foundation for Autonomous Agents and Multiagent Systems 2019, ISBN 978-1-4503-6309-9
Keynote Talks
- Subbarao Kambhampati: Synthesizing Explainable Behavior for Human-AI Collaboration. 1-2
- Francesca Rossi, Andrea Loreggia: Preferences and Ethical Priorities: Thinking Fast and Slow in AI. 3-4
- Carles Sierra: Responsible Autonomy. 5
- Doina Precup: Building Knowledge for AI Agents with Reinforcement Learning. 6
1A: Reinforcement Learning 1
- Sammie Katt, Frans A. Oliehoek, Christopher Amato: Bayesian Reinforcement Learning in Factored POMDPs. 7-15
- Jiang Rong, Tao Qin, Bo An: Competitive Bridge Bidding with Deep Neural Networks. 16-24
- Sanmit Narvekar, Peter Stone: Learning Curriculum Policies for Reinforcement Learning. 25-33
- Bohan Wu, Jayesh K. Gupta, Mykel J. Kochenderfer: Model Primitive Hierarchical Lifelong Reinforcement Learning. 34-42
- Gregory Palmer, Rahul Savani, Karl Tuyls: Negative Update Intervals in Deep Multi-Agent Reinforcement Learning. 43-51
- Yang Liu, Yifeng Zeng, Yingke Chen, Jing Tang, Yinghui Pan: Self-Improving Generative Adversarial Reinforcement Learning. 52-60
1B: Socially Intelligent Agents 1
- Mike Ligthart, Timo Fernhout, Mark A. Neerincx, Kelly L. A. van Bindsbergen, Martha A. Grootenhuis, Koen V. Hindriks: A Child and a Robot Getting Acquainted - Interaction Design for Eliciting Self-Disclosure. 61-70
- Pooja Prajod, Mohammed Al Owayyed, Tim Rietveld, Jaap-Jan van der Steeg, Joost Broekens: The Effect of Virtual Agent Warmth on Human-Agent Negotiation. 71-76
- O. Can Görür, Benjamin Rosman, Sahin Albayrak: Anticipatory Bayesian Policy Selection for Online Adaptation of Collaborative Robots to Unknown Human Types. 77-85
- Hannes Ritschel, Ilhan Aslan, David Sedlbauer, Elisabeth André: Irony Man: Augmenting a Social Robot with the Ability to Use Irony in Multimodal Communication with Humans. 86-94
- Kim Baraka, Marta Couto, Francisco S. Melo, Manuela Veloso: An Optimization Approach for Structured Agent-Based Provider/Receiver Tasks. 95-103
- Sepehr Janghorbani, Ashutosh Modi, Jakob Buhmann, Mubbasir Kapadia: Domain Authoring Assistant for Intelligent Virtual Agent. 104-112
1C: Multi-Robot Systems
- Michael Amir, Alfred M. Bruckstein: Minimizing Travel in the Uniform Dispersal Problem for Robotic Sensors. 113-121
- Rui Liu, Fan Jia, Wenhao Luo, Meghan Chandarana, Changjoo Nam, Michael Lewis, Katia P. Sycara: Trust-Aware Behavior Reflection for Robot Swarm Self-Healing. 122-130
- Florence Ho, Ana Salta, Rúben Geraldes, Artur Goncalves, Marc Cavazza, Helmut Prendinger: Multi-Agent Path Finding for UAV Traffic Management. 131-139
- Pierre Thalamy, Benoît Piranda, Julien Bourgeois: Distributed Self-Reconfiguration using a Deterministic Autonomous Scaffolding Structure. 140-148
- Yinon Douchan, Ran Wolf, Gal A. Kaminka: Swarms Can be Rational. 149-157
- Ebtehal Turki Saho Alotaibi: A Complete Multi-Robot Path-Planning Algorithm: JAAMAS Track. 158-160
1D: Verification and Validation
- Alessio Lomuscio, Edoardo Pirovano: A Counter Abstraction Technique for the Verification of Probabilistic Swarm Systems. 161-169
- Natasha Alechina, Mehdi Dastani, Brian Logan: Decidable Model Checking with Uniform Strategies. 170-178
- Panagiotis Kouvaros, Alessio Lomuscio, Edoardo Pirovano, Hashan Punchihewa: Formal Verification of Open Multi-Agent Systems. 179-187
- Giuseppe Perelli: Enforcing Equilibria in Multi-Agent Systems. 188-196
- Damian Kurpiewski, Michal Knapik, Wojciech Jamroga: On Domination and Control in Strategic Ability. 197-205
- Francesco Belardinelli, Stéphane Demri: Resource-bounded ATL: the Quest for Tractable Fragments. 206-214
1E: Economic Paradigms: Learning and Adaptation
- Weiran Shen, Pingzhong Tang, Song Zuo: Automated Mechanism Design via Neural Networks. 215-223
- Michal Sustr, Vojtech Kovarík, Viliam Lisý: Monte Carlo Continual Resolving for Online Strategy Computation in Imperfect Information Games. 224-232
- James P. Bailey, Georgios Piliouras: Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System. 233-241
- Yasser F. O. Mohammad, Shinji Nakadai: Optimal Value of Information Based Elicitation During Negotiation. 242-250
- Jayakumar Subramanian, Aditya Mahajan: Reinforcement Learning in Stationary Mean-field Games. 251-259
- Jasper Bakker, Aron Hammond, Daan Bloembergen, Tim Baarslag: RLBOA: A Modular Reinforcement Learning Framework for Autonomous Negotiating Agents. 260-268
1F: Agent Societies and Societal Issues 1
- Jason Xu, Julián García, Toby Handfield: Cooperation with Bottom-up Reputation Dynamics. 269-276
- Yi Yang, Quan Bai, Qing Liu: Dynamic Source Weight Computation for Truth Inference over Data Streams. 277-285
- Nanda Kishore Sreenivas, Shrisha Rao: Egocentric Bias and Doubt in Cognitive Agents. 286-295
- Fan Yang, Bo Liu, Wen Dong: Optimal Control of Complex Systems through Variational Inference with a Discrete Event Decision Process. 296-304
- Kai Zhou, Tomasz P. Michalak, Marcin Waniek, Talal Rahwan, Yevgeniy Vorobeychik: Attacking Similarity-Based Link Prediction in Social Networks. 305-313
- Sixie Yu, Yevgeniy Vorobeychik: Removing Malicious Nodes from Networks. 314-322
2A: Reinforcement Learning 2
- Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn: NoRML: No-Reward Meta Learning. 323-331
- Banafsheh Rafiee, Sina Ghiassian, Adam White, Richard S. Sutton: Prediction in Intelligence: An Empirical Comparison of Off-policy Algorithms on Robots. 332-340
- Chao Yu, Xin Wang, Jianye Hao, Zhanbo Feng: Reinforcement Learning for Cooperative Overtaking. 341-349
- Richard Klíma, Daan Bloembergen, Michael Kaisers, Karl Tuyls: Robust Temporal Difference Learning for Critical Domains. 350-358
- Changjian Li, Krzysztof Czarnecki: Urban Driving with Multi-Objective Deep Reinforcement Learning. 359-367
- Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo Li, Jinfeng Yi, Dawn Song: How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning. 368-376
2B: Practical Applications of Game Theory
- Haris Aziz, Serge Gaspers, Zhaohong Sun, Toby Walsh: From Matching with Diversity Constraints to Matching with Regional Quotas. 377-385
- David Mguni, Joel Jennings, Emilio Sison, Sergio Valcarcel Macua, Sofia Ceppi, Enrique Munoz de Cote: Coordinating the Crowd: Inducing Desirable Equilibria in Non-Cooperative Systems. 386-394
- Shahrzad Gholami, Amulya Yadav, Long Tran-Thanh, Bistra Dilkina, Milind Tambe: Don't Put All Your Strategies in One Basket: Playing Green Security Games with Imperfect Prior Knowledge. 395-403
- Chenxi Qiu, Anna Cinzia Squicciarini, Benjamin V. Hanrahan: Incentivizing Distributive Fairness for Crowdsourcing Workers. 404-412
- Péter Biró, Walter Kern, Dömötör Pálvölgyi, Daniël Paulusma: Generalized Matching Games for International Kidney Exchange. 413-421
- Hongyao Ma, Reshef Meir, David C. Parkes, James Y. Zou: Contingent Payment Mechanisms for Resource Utilization. 422-430
2C: Knowledge Representation and Reasoning
- Andrew Perrault, Craig Boutilier: Experiential Preference Elicitation for Autonomous Heating and Cooling Systems. 431-439
- Peta Masters, Sebastian Sardiña: Goal Recognition for Rational and Irrational Agents. 440-448
- Min He, Hongliang Guo: Interleaved Q-Learning with Partially Coupled Training Process. 449-457
- Nikhil Bhargava, Brian C. Williams: Multiagent Disjunctive Temporal Networks. 458-466
- Luis Enrique Pineda, Shlomo Zilberstein: Soft Labeling in Stochastic Shortest Path Problems. 467-475
- Atena M. Tabakhi, William Yeoh, Makoto Yokoo: Parameterized Heuristics for Incomplete Weighted CSPs with Elicitation Costs. 476-484
2D: Social Choice Theory 1
- Luis Sánchez Fernández, Jesús A. Fisteus: Monotonicity Axioms in Approval-based Multi-winner Voting Rules. 485-493
- Markus Brill, Piotr Faliszewski, Frank Sommer, Nimrod Talmon: Approximation Algorithms for BalancedCC Multiwinner Rules. 494-502
- Aizhong Zhou, Yongjie Yang, Jiong Guo: Parameterized Complexity of Committee Elections with Dichotomous and Trichotomous Votes. 503-510
- Sushmita Gupta, Pallavi Jain, Sanjukta Roy, Saket Saurabh, Meirav Zehavi: Gehrlein Stability in Committee Selection: Parameterized Hardness and Algorithms. 511-519
- Felix Brandt, Johannes Hofbauer, Martin Strobel: Exploring the No-Show Paradox for Condorcet Extensions Using Ehrhart Theory and Computer Simulations. 520-528
- Jasper Lu, David Kai Zhang, Zinovi Rabinovich, Svetlana Obraztsova, Yevgeniy Vorobeychik: Manipulating Elections by Selecting Issues. 529-537
2E: Game Theory 1
- Gabriel Istrate, Cosmin Bonchis, Alin Brîndusescu: Attacking Power Indices by Manipulating Player Reliability. 538-546
- Kai Jin, Ce Jin, Zhaoquan Gu: Cooperation via Codes in Restricted Hat Guessing Games. 547-555
- Arunesh Sinha, Michael P. Wellman: Incentivizing Collaboration in a Competition. 556-564
- Robert Bredereck, Edith Elkind, Ayumi Igarashi: Hedonic Diversity Games. 565-573
- Raffaello Carosi, Gianpiero Monaco, Luca Moscardelli: Local Core Stability in Simple Symmetric Fractional Hedonic Games. 574-582
- Naoyuki Kamiyama: Many-to-Many Stable Matchings with Ties, Master Preference Lists, and Matroid Constraints. 583-591
2F: Agent Societies and Societal Issues 2
- Vahid Yazdanpanah, Mehdi Dastani, Wojciech Jamroga, Natasha Alechina, Brian Logan: Strategic Responsibility Under Imperfect Information. 592-600
- Candice Schumann, Samsara N. Counts, Jeffrey S. Foster, John P. Dickerson: The Diverse Cohort Selection Problem. 601-609
- Nicolas De Bufala, Jean-Daniel Kant: An Evolutionary Approach to Find Optimal Policies with an Agent-Based Simulation. 610-618
- Jie Gao, Grant Schoenebeck, Fang-Yi Yu: The Volatility of Weak Ties: Co-evolution of Selection and Influence in Social Networks. 619-627
- Palash Dey, Sourav Medya: Covert Networks: How Hard is It to Hide? 628-637
- Ferdinando Fioretto, Pascal Van Hentenryck: Privacy-Preserving Federated Data Sharing. 638-646
3A: Learning and Adaptation
- Riccardo Sartea, Alessandro Farinelli, Matteo Murari: Agent Behavioral Analysis Based on Absorbing Markov Chains. 647-655
- Oscar Chang, Robert Kwiatkowski, Siyuan Chen, Hod Lipson: Agent Embeddings: A Latent Representation for Pole-Balancing Networks. 656-664
- Panayiotis Danassis, Boi Faltings: Courtesy as a Means to Coordinate. 665-673
- Rohith Dwarakanath Vallam, Sarthak Ahuja, Surya Shravan Kumar Sajja, Ritwik Chaudhuri, Rakesh Pimplikar, Kushal Mukherjee, Ramasuri Narayanam, Gyana R. Parija: Dynamic Particle Allocation to Solve Interactive POMDP Models for Social Decision Making. 674-682
- Jane X. Wang, Edward Hughes, Chrisantha Fernando, Wojciech M. Czarnecki, Edgar A. Duéñez-Guzmán, Joel Z. Leibo: Evolving Intrinsic Motivations for Altruistic Behavior. 683-692
- Ryan Lowe, Jakob N. Foerster, Y-Lan Boureau, Joelle Pineau, Yann N. Dauphin: On the Pitfalls of Measuring Emergent Communication. 693-701
3B: Socially Intelligent Agents 2
- Gabriel Castillo, Michael Neff: What do we express without knowing?: Emotion in Gesture. 702-710
- Yaqian Zhang, Wooi-Boon Goh: Bootstrapped Policy Gradient for Difficulty Adaptation in Intelligent Tutoring Systems. 711-719
- Samantha Krening, Karen M. Feigh: Newtonian Action Advice: Integrating Human Verbal Instruction with Reinforcement Learning. 720-727
- Taylor Kessler Faulkner, Reymundo A. Gutierrez, Elaine Schaertl Short, Guy Hoffman, Andrea Lockerd Thomaz: Active Attention-Modified Policy Shaping: Socially Interactive Agents Track. 728-736
- Kallirroi Georgila, Mark G. Core, Benjamin D. Nye, Shamya Karumbaiah, Daniel Auerbach, Maya Ram: Using Reinforcement Learning to Optimize the Policies of an Intelligent Tutoring System for Interpersonal Skills Training. 737-745
- Jize Chen, Changhong Wang: Reaching Cooperation using Emerging Empathy and Counter-empathy. 746-753
3C: Engineering Multiagent Systems 1
- Buster A. Bernstein, Jasper C. M. Geurtz, Vincent J. Koeman: Evaluating the Effectiveness of Multi-Agent Organisational Paradigms in a Real-Time Strategy Environment: Engineering Multiagent Systems Track. 754-762
- Mohammad Al-Zinati, Rym Wenkstern: Agent-Environment Interactions in Large-Scale Multi-Agent Based Simulation Systems. 763-771
- Sandra Garcia-Rodriguez, Jorge J. Gómez-Sanz: Robust Decentralised Agent Based Approach for Microgrid Energy Management. 772-780
- Akin Günay, Amit K. Chopra, Munindar P. Singh: Supple: Multiagent Communication Protocols with Causal Types. 781-789
- Alessandro Ricci, Andrei Ciortea, Simon Mayer, Olivier Boissier, Rafael H. Bordini, Jomi Fred Hübner: Engineering Scalable Distributed Environments and Organizations for MAS. 790-798
- Rafael C. Cardoso, Rafael H. Bordini: Decentralised Planning for Multi-Agent Programming Platforms. 799-807
3D: Social Choice Theory 2
- Robert Bredereck, Junjie Luo: Complexity of Manipulation in Premise-Based Judgment Aggregation with Simple Formulas. 819-827
- Sirin Botan, Umberto Grandi, Laurent Perrussel: Multi-Issue Opinion Diffusion under Constraints. 828-836
- Hadi Hosseini, Kate Larson: Multiple Assignment Problems under Lexicographic Preferences. 837-845
- Gábor Erdélyi, Christian Reger, Yongjie Yang: Towards Completing the Puzzle: Solving Open Problems for Control in Elections. 846-854
- Palash Dey, Swaprava Nath, Garima Shakya: Testing Preferential Domains Using Sampling. 855-863
- Jingyan Wang, Nihar B. Shah: Your 2 is My 1, Your 3 is My 9: Handling Arbitrary Miscalibrations in Ratings. 864-872
3E: Game Theory 2
- Gianpiero Monaco, Luca Moscardelli, Yllka Velaj: On the Performance of Stable Outcomes in Modified Fractional Hedonic Games with Egalitarian Social Welfare. 873-881
- Hendrik Fichtenberger, Amer Krivosija, Anja Rey: Testing Individual-Based Stability Properties in Graphical Hedonic Games. 882-890
- Anna Maria Kerkmann, Jörg Rothe: Stability in FEN-Hedonic Games for Single-Player Deviations. 891-899
- Aurélie Beynier, Sylvain Bouveret, Michel Lemaître, Nicolas Maudet, Simon Rey, Parham Shams: Efficiency, Sequenceability and Deal-Optimality in Fair Division of Indivisible Goods. 900-908
- Andrea Celli, Stefano Coniglio, Nicola Gatti: Computing Optimal Ex Ante Correlated Equilibria in Two-Player Sequential Games. 909-917
- Yossi Azar, Allan Borodin, Michal Feldman, Amos Fiat, Kineret Segal: Efficient Allocation of Free Stuff. 918-925
3F: Logics for Agents
- Christoph Schwering, Maurice Pagnucco: A Representation Theorem for Reasoning in First-Order Multi-Agent Knowledge Bases. 926-934
- Xinliang Song, Tonghan Wang, Chongjie Zhang: Convergence of Multi-Agent Learning with a Finite Step Size in General-Sum Games. 935-943
- Emiliano Lorini, Fabián Romero: Decision Procedures for Epistemic Logic Exploiting Belief Bases. 944-952
- Tim French, Rustam Galimullin, Hans van Ditmarsch, Natasha Alechina: Groups Versus Coalitions: On the Relative Expressivity of GAL and CAL. 953-961
- Wojciech Jamroga, Vadim Malvone, Aniello Murano: Natural Strategic Ability under Imperfect Information. 962-970
- Aurèle Barrière, Bastien Maubert, Aniello Murano, Sasha Rubin: Reasoning about Changes of Observational Power in Logics of Knowledge and Time. 971-979
4A: Learning Agent Capabilities
- Xihan Li, Jia Zhang, Jiang Bian, Yunhai Tong, Tie-Yan Liu: A Cooperative Multi-Agent Reinforcement Learning Framework for Resource Balancing in Complex Logistics Network. 980-988
- Siyuan Li, Fangda Gu, Guangxiang Zhu, Chongjie Zhang: Context-Aware Policy Reuse. 989-997
- Giuseppe Cuccu, Julian Togelius, Philippe Cudré-Mauroux: Playing Atari with Six Neurons. 998-1006
- Tong Mu, Karan Goel, Emma Brunskill: PLOTS: Procedure Learning from Observations using subTask Structure. 1007-1015
- Josiah P. Hanna, Peter Stone: Reducing Sampling Error in Policy Gradient Learning. 1016-1024
- Longxiang Shi, Shijian Li, Longbing Cao, Long Yang, Gang Pan: TBQ(σ): Improving Efficiency of Trace Utilization for Off-Policy Reinforcement Learning. 1025-1032
4B: Multimodal Interaction
- Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere: A Grounded Interaction Protocol for Explainable Artificial Intelligence. 1033-1041
- Akshat Agarwal, Swaminathan Gurumurthy, Vasu Sharma, Mike Lewis, Katia P. Sycara: Community Regularization of Visually-Grounded Dialog. 1042-1050
- Kathrin Janowski, Elisabeth André: What If I Speak Now?: A Decision-Theoretic Approach to Personality-Based Turn-Taking. 1051-1059
- Dan Feng, Elín Carstensdóttir, Magy Seif El-Nasr, Stacy Marsella: Exploring Improvisational Approaches to Social Knowledge Acquisition. 1060-1068
- Julie Porteous, Alan Lindsay: Protagonist vs Antagonist PROVANT: Narrative Generation as Counter Planning. 1069-1077
- Sule Anjomshoae, Amro Najjar, Davide Calvaresi, Kary Främling: Explainable Agents and Robots: Results from a Systematic Literature Review. 1078-1088
4C: Deep Learning
- Hyun-Rok Lee, Taesik Lee: Improved Cooperative Multi-agent Reinforcement Learning Algorithm Augmented by Mixing Demonstrations from Centralized Policy. 1089-1098
- Joel Z. Leibo, Julien Pérolat, Edward Hughes, Steven Wheelwright, Adam H. Marblestone, Edgar A. Duéñez-Guzmán, Peter Sunehag, Iain Dunning, Thore Graepel: Malthusian Reinforcement Learning. 1099-1107
- Hangyu Mao, Zhengchao Zhang, Zhen Xiao, Zhibo Gong: Modelling the Dynamic Joint Policy of Teammates with Attention Multi-agent DDPG. 1108-1116
- Diana Borsa, Nicolas Heess, Bilal Piot, Siqi Liu, Leonard Hasenclever, Rémi Munos, Olivier Pietquin: Observational Learning by Reinforcement Learning. 1117-1124
- Ondrej Biza, Robert Platt Jr.: Online Abstraction with MDP Homomorphisms for Deep Learning. 1125-1133
- Dylan Banarse, Yoram Bachrach, Siqi Liu, Guy Lever, Nicolas Heess, Chrisantha Fernando, Pushmeet Kohli, Thore Graepel: The Body is Not a Given: Joint Agent Policy Learning and Morphology Evolution. 1134-1142
4D: Robotics
- Mikko Lauri, Joni Pajarinen, Jan Peters: Information Gathering in Decentralized POMDPs by Policy Graph Improvement. 1143-1151
- Minghua Liu, Hang Ma, Jiaoyang Li, Sven Koenig: Task and Path Planning for Multi-Agent Pickup and Delivery. 1152-1160
- Benjamin Schnieders, Shan Luo, Gregory Palmer, Karl Tuyls: Fully Convolutional One-Shot Object Segmentation for Industrial Robotics. 1161-1169
- Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: Online Inverse Reinforcement Learning Under Occlusion. 1170-1178
- Hao-Tsung Yang, Shih-Yu Tsai, Kin Sum Liu, Shan Lin, Jie Gao: Patrol Scheduling Against Adversaries with Varying Attack Durations. 1179-1188
- Gokarna Sharma, Ayan Dutta, Jong-Hoon Kim: Optimal Online Coverage Path Planning with Energy Constraints. 1189-1197
4E: Game Theory 3
- Julian Gutierrez, Sarit Kraus, Michael J. Wooldridge: Cooperative Concurrent Games. 1198-1206
- Vincenzo Auletta, Diodato Ferraioli, Valeria Fionda, Gianluigi Greco: Maximizing the Spread of an Opinion when Tertium Datur Est. 1207-1215
- Erel Segal-Halevi, Shani Alkoby, Tomer Sharbaf, David Sarne: Obtaining Costly Unverifiable Valuations from a Single Agent. 1216-1224
- Yun Kuen Cheung, Martin Hoefer, Paresh Nakhe: Tracing Equilibrium in Dynamic Markets via Distributed Adaptation. 1225-1233
- Paolo Serafino, Carmine Ventre, Angelina Vidali: Truthfulness on a Budget: Trading Money for Approximation through Monitoring. 1234-1242
- Bo Li, Minming Li, Xiaowei Wu: Well-behaved Online Load Balancing Against Strategic Jobs. 1243-1251
4F: Communication and Argumentation 1
- Yannis Dimopoulos, Jean-Guy Mailly, Pavlos Moraitis: Argumentation-based Negotiation with Incomplete Opponent Profiles. 1252-1260
- Oana Cocarascu, Antonio Rago, Francesca Toni: Extracting Dialogical Explanations for Review Aggregations with Argumentative Dialogical Agents. 1261-1269
- Leila Amgoud, Dragan Doder: Gradual Semantics Accounting for Varied-Strength Attacks. 1270-1278
- Yakoub Salhi: On an Argument-centric Persuasion Framework. 1279-1287
- Manel Ayadi, Nahla Ben Amor, Jérôme Lang, Dominik Peters: Single Transferable Vote: Incomplete Knowledge and Communication Issues. 1288-1296
- Mattias Appelgren, Alex Lascarides: Learning Plans by Acquiring Grounded Linguistic Meanings from Corrections. 1297-1305
5A: Learning Agents
- Yu Wang, Yue Deng, Yilin Shen, Hongxia Jin: A New Concept of Convex based Multiple Neural Networks Structure. 1306-1314
- Xiaotian Hao, Weixun Wang, Jianye Hao, Yaodong Yang: Independent Generative Adversarial Self-Imitation Learning in Cooperative Multiagent Systems. 1315-1323
- Wei Tang, Chien-Ju Ho: Bandit Learning with Biased Human Feedback. 1324-1332
- Mason Bretan, Sageev Oore, Siddharth Sanan, Larry P. Heck: Robot Learning by Collaborative Network Training: A Self-Supervised Method using Ranking. 1333-1340
5B: Human-Robot interaction
- Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy: Using Causal Analysis to Learn Specifications from Task Demonstrations. 1341-1349
- Tesca Fitzgerald, Elaine Short, Ashok K. Goel, Andrea Thomaz: Human-guided Trajectory Adaptation for Tool Transfer. 1350-1358
- S. M. al Mahi, Kyungho Nam, Christopher Crick: Distributed Heterogeneous Robot-Human Teams: Robotics Track. 1359-1367
- Sanket Gaurav, Brian D. Ziebart: Discriminatively Learning Inverse Optimal Control Models for Predicting Human Intentions. 1368-1376
5C: Industrial Applications Track
- Zehong Hu, Zhen Wang, Zhao Li, Shichang Hu, Shasha Ruan, Jie Zhang: Fraud Regulating Policy for E-Commerce via Constrained Contextual Bandits. 1377-1385
- Lu Duan, Haoyuan Hu, Yu Qian, Yu Gong, Xiaodong Zhang, Jiangwen Wei, Yinghui Xu: A Multi-task Selected Learning Approach for Solving 3D Flexible Bin Packing Problem. 1386-1394
- Yujie Chen, Yu Qian, Yichen Yao, Zili Wu, Rongqi Li, Yinzhi Zhou, Haoyuan Hu, Yinghui Xu: Can Sophisticated Dispatching Strategy Acquired by Reinforcement Learning? 1395-1403
- Sebastien Blandin, Laura Wynter, Hasan Poonawala, Sean Laguna, Basile Dura: FASTER: Fusion AnalyticS for public Transport Event Response. 1404-1412
5D: Social Choice Theory 3
- Takehiro Ito, Naoyuki Kamiyama, Yusuke Kobayashi, Yoshio Okamoto: Algorithms for Gerrymandering over Graphs. 1413-1421
- Roy Fairstein, Adam Lauz, Reshef Meir, Kobi Gal: Modeling People's Voting Behavior with Poll Information. 1422-1430
- Justin Kruger, Sebastian Schneckenburger: Fall if it Lifts your Teammate: A Novel Type of Candidate Manipulation. 1431-1439
- Yongjie Yang, Dinko Dimitrov: How Hard Is It to Control a Group? 1440-1442
5E: Auctions and Mechanism Design
- Minming Li, Lili Mei, Yi Xu, Guochuan Zhang, Yingchao Zhao: Facility Location Games with Externalities. 1443-1451
- Ilan Nehama, Taiki Todo, Makoto Yokoo: Manipulations-resistant Facility Location Mechanisms for ZV-line Graphs. 1452-1460
- Lingjie Duan, Bo Li, Minming Li, Xinping Xu: Heterogeneous Two-facility Location Games with Minimum Distance Requirement. 1461-1469
- Xujin Chen, Minming Li, Changjun Wang, Chenhao Wang, Yingchao Zhao: Truthful Mechanisms for Location Games of Dual-Role Facilities. 1470-1478
5F: Agent Cooperation 1
- Isaac Vandermeulen, Roderich Groß, Andreas Kolling: Balanced Task Allocation by Partitioning the Multiple Traveling Salesperson Problem. 1479-1487
- Wenhao Luo, Changjoo Nam, George Kantor, Katia P. Sycara: Distributed Environmental Modeling and Adaptive Sampling for Multi-Robot Sensor Coverage. 1488-1496
- Arambam James Singh, Akshat Kumar: Graph Based Optimization for Multiagent Cooperation. 1497-1505
- Yanchen Deng, Ziyu Chen, Dingding Chen, Xingqiong Jiang, Qiang Li: PT-ISABB: A Hybrid Tree-based Complete Algorithm to Solve Asymmetric Distributed Constraint Optimization Problems. 1506-1514
5G: Networks
- Chen Hajaj, Sixie Yu, Zlatko Joveski, Yifan Guo, Yevgeniy Vorobeychik: Adversarial Coordination on Social Networks. 1515-1523
- Dominic Aits, Alexander Carver, Paolo Turrini: Group Segregation in Social Networks. 1524-1532
- Mohammad Rashedul Hasan, Anita Raja, Ana L. C. Bazzan: A Context-aware Convention Formation Framework for Large-Scale Networks. 1533-1535
- Pablo Pico-Valencia, Juan A. Holgado-Terriza, José A. Senso: An Agent Model Based on Open Linked Data for Building Internet of Agents Ecosystems. 1536-1538
6A: Agent-Based Simulation
- Guni Sharon, Stephen D. Boyles, Shani Alkoby, Peter Stone: Marginal Cost Pricing with a Fixed Error Factor in Traffic Networks. 1539-1546
- Giulio Bacchiani, Daniele Molinari, Marco Patander: Microscopic Traffic Simulation by Cooperative Multi-agent Deep Reinforcement Learning. 1547-1555
- Fernando P. Santos, Samuel Francisco Mascarenhas, Francisco C. Santos, Filipa Correia, Samuel Gomes, Ana Paiva: Outcome-based Partner Selection in Collective Risk Dilemmas. 1556-1564
- Kyriakos Polymenakos, Alessandro Abate, Stephen J. Roberts: Safe Policy Search Using Gaussian Process Models. 1565-1573
6B: Auctions and Mechanism Design
- Maria Kyropoulou, Carmine Ventre: Obviously Strategyproof Mechanisms without Money for Scheduling. 1574-1581
- Yingkai Li, Pinyan Lu, Haoran Ye: Revenue Maximization with Imprecise Distribution. 1582-1590
- Weiran Shen, Pingzhong Tang, Yulong Zeng: Buyer Signaling Games in Auctions. 1591-1599
- Georgios Methenitis, Michael Kaisers, Han La Poutré: Forecast-Based Mechanisms for Demand Response. 1600-1608
6C: Engineering Multiagent Systems 2
- Davide Dell'Anna, Mehdi Dastani, Fabiano Dalpiaz: Runtime Revision of Norms and Sanctions based on Agent Preferences. 1609-1617
- Giorgio Audrito, Sergio Bergamini, Ferruccio Damiani, Mirko Viroli: Effective Collective Summarisation of Distributed Data in Mobile Multi-Agent Systems. 1618-1626
- Andrew Silva, Sonia Chernova: Unsupervised Role Discovery Using Temporal Observations of Agents. 1627-1634
- Parantapa Bhattacharya, Saliya Ekanayake, Chris J. Kuhlman, Christian Lebiere, Don Morrison, Samarth Swarup, Mandy L. Wilson, Mark G. Orr: The Matrix: An Agent-Based Modeling Framework for Data Intensive Simulations. 1635-1643
6D: Blue Sky
- Robin Cohen, Mike Schaekermann, Sihao Liu, Michael Cormier: Trusted AI and the Contribution of Trust Modeling in Multiagent Systems. 1644-1648
- Yazan Mualla, Amro Najjar, Stéphane Galland, Christophe Nicolle, Igor Haman Tchappi, Ansar-Ul-Haque Yasar, Kary Främling: Between the Megalopolis and the Deep Blue Sky: Challenges of Transport with UAVs in Future Smart Cities. 1649-1653
- Budhitama Subagdja, Ah-Hwee Tan: Beyond Autonomy: The Self and Life of Social Agents. 1654-1658
- Andrei Ciortea, Simon Mayer, Fabien Gandon, Olivier Boissier, Alessandro Ricci, Antoine Zimmermann: A Decade in Hindsight: The Missing Bridge Between Multi-Agent Systems and the World Wide Web. 1659-1663
- Riccardo Tommasini, Davide Calvaresi, Jean-Paul Calbimonte: Stream Reasoning Agents: Blue Sky Ideas Track. 1664-1680
6E: Agent Cooperation 2
- John P. Dickerson, Karthik Abinav Sankararaman, Kanthi Kiran Sarpatwar, Aravind Srinivasan, Kun-Lung Wu, Pan Xu: Online Resource Allocation with Matching Constraints. 1681-1689
- Matteo Baldoni, Cristina Baroglio, Federico Capuzzimati, Roberto Micalizio: Type Checking for Protocol Role Enactments via Commitments. 1690-1692
- Jun Wu, Yuan Zhang, Yu Qiao, Lei Zhang, Chongjun Wang, Junyuan Xie: Multi-unit Budget Feasible Mechanisms for Cellular Traffic Offloading. 1693-1701
- Shaheen Fatima, Michael J. Wooldridge: Computing Optimal Coalition Structures in Polynomial Time. 1702-1703
6F: Communication and Argumentation 2
- Jesse Heyninck, Christian Straßer: A Fully Rational Argumentation System for Preordered Defeasible Rules. 1704-1712
- Amin Karamlou, Kristijonas Cyras, Francesca Toni: Complexity Results and Algorithms for Bipolar Argumentation. 1713-1721
- Nico Potyka: Extending Modular Semantics for Bipolar Weighted Argumentation. 1722-1730
- Kristijonas Cyras, Tiago Oliveira: Resolving Conflicts in Clinical Guidelines using Argumentation. 1731-1739
6G: Planning & Learning
- Dorin Shmaryahu, Jörg Hoffmann, Guy Shani: Comparative Criteria for Partially Observable Contingent Planning. 1740-1742
- Bo Yang, Min Liu: Attack-Resilient Connectivity Game for UAV Networks using Generative Adversarial Learning. 1743-1751
- Jen Jen Chung, Damjan Miklic, Lorenzo Sabattini, Kagan Tumer, Roland Siegwart: The Impact of Agent Definitions and Interactions on Multiagent Learning for Coordination. 1752-1760
- Josefina Sierra-Santibáñez: An Agent-Based Model of the Emergence and Evolution of a Language System for Boolean Coordination: JAAMAS Track. 1761-1763
Extended Abstracts
- João Paulo Aires, Roger Granada, Juarez Monteiro, Rodrigo Coelho Barros, Felipe Meneguzzi: Classification of Contractual Conflicts via Learning of Semantic Representations. 1764-1766
- Abdullah Cihan Ak, Arda Inceoglu, Sanem Sariel: When to Stop for Safe Manipulation in Unstructured Environments? 1767-1769
- Dario Albani, Wolfgang Hönig, Nora Ayanian, Daniele Nardi, Vito Trianni: Summary: Distributed Task Assignment and Path Planning with Limited Communication for Robot Teams. 1770-1772
- Shani Alkoby, Avilash Rath, Peter Stone: Teaching Social Behavior through Human Reinforcement for Ad hoc Teamwork - The STAR Framework: Extended Abstract. 1773-1775
- Nicolas Anastassacos, Mirco Musolesi: Towards Decentralized Reinforcement Learning Architectures for Social Dilemmas. 1776-1777
- Enrique Areyan Viqueira, Amy Greenwald, Cyrus Cousins, Eli Upfal: Learning Simulation-Based Games from Data. 1778-1780
- Priscilla Avegliano, Jaime Simão Sichman: Using Surrogate Models to Calibrate Agent-based Model Parameters Under Data Scarcity. 1781-1783
- Amos Azaria, Keren Nivasch: The Multimodal Correction Detection Problem. 1784-1786
- Haris Aziz, Hau Chan, Bo Li: Maxmin Share Fair Allocation of Indivisible Chores to Asymmetric Agents. 1787-1789
- Quentin Baert, Anne-Cécile Caron, Maxime Morge, Jean-Christophe Routier, Kostas Stathis: Adaptive Multi-agent System for Situated Task Allocation. 1790-1792
- David Balaban, John Cooper, Erik Komendera: Inverse Kinematics and Sensitivity Minimization of an n-Stack Stewart Platform. 1793-1795
- Matteo Baldoni, Cristina Baroglio, Olivier Boissier, Roberto Micalizio, Stefano Tedeschi: Engineering Business Processes through Accountability and Agents. 1796-1798
- Jacopo Banfi, Mark E. Campbell: High-Level Path Planning in Hostile Dynamic Environments. 1799-1801
- Souvik Barat, Harshad Khadilkar, Hardik Meisheri, Vinay Kulkarni, Vinita Baniwal, Prashant Kumar, Monika Gajrani: Actor Based Simulation for Closed Loop Control of Supply Chain using Reinforcement Learning. 1802-1804
- Elaheh Barati, Xuewen Chen, Zichun Zhong: Attention-based Deep Reinforcement Learning for Multi-view Environments. 1805-1807
- Mika Barkan, Gal A. Kaminka: Towards Predictive Execution Monitoring in BDI Recipes. 1808-1810
- Siddharth Barman, Ganesh Ghalme, Shweta Jain, Pooja Kulkarni, Shivika Narang: Fair Division of Indivisible Goods Among Strategic Agents. 1811-1813
- Dorothea Baumeister, Tobias Hogrebe: Manipulative Design of Scoring Systems. 1814-1816
- Francesco Belardinelli, Umberto Grandi: A Social Choice Theoretic Perspective on Database Aggregation. 1817-1819
- Francesco Belardinelli, Ioana Boureanu, Catalin Dima, Vadim Malvone: Verifying Strategic Abilities in Multi-agent Systems with Private Data-Sharing. 1820-1822
- Clara Benac Earle, Lars-Åke Fredlund: A Property-based Testing Framework for Multi-Agent Systems. 1823-1825
- Sushrut Bhalla, Sriram Ganapathi Subramanian, Mark Crowley: Training Cooperative Agents for Multi-Agent Reinforcement Learning. 1826-1828
- Fan Bi, Sebastian Stein, Enrico H. Gerding, Nick R. Jennings, Tom La Porta: A Truthful Online Mechanism for Allocating Fog Computing Resources. 1829-1831
- Arpita Biswas, Suvam Mukherjee: Fairness Through the Lens of Proportional Equality. 1832-1834
- James Blythe, Emilio Ferrara, Di Huang, Kristina Lerman, Goran Muric, Anna Sapienza, Alexey Tregubov, Diogo Pacheco, John Bollenbacher, Alessandro Flammini, Pik-Mai Hui, Filippo Menczer: The DARPA SocialSim Challenge: Massive Multi-Agent Simulations of the Github Ecosystem. 1835-1837
- Elizabeth Bondi, Hoon Oh, Haifeng Xu, Fei Fang, Bistra Dilkina, Milind Tambe: Broken Signals in Security Games: Coordinating Patrollers and Sensors in the Real World. 1838-1840
- Valentin Bouziat, Xavier Pucel, Stéphanie Roussel, Louise Travé-Massuyès: Preference-Based Fault Estimation in Autonomous Robots: Incompleteness and Meta-Diagnosis. 1841-1843
- Ronen I. Brafman, Giuseppe De Giacomo: Regular Decision Processes: Modelling Dynamic Systems without Using Hidden Variables. 1844-1846
- Angelina Brilliantova, Anton Pletenev, Hadi Hosseini: The Rise and Fall of Complex Family Structures: Coalition Formation, Stability, and Power Struggle. 1847-1849
- Cédric L. R. Buron, Zahia Guessoum, Sylvain Ductor: MCTS-based Automated Negotiation Agent. 1850-1852
- Grace Cai, Don Sofge: An Urgency-Dependent Quorum Sensing Algorithm for N-Site Selection in Autonomous Swarms. 1853-1855
- Logan Carlson, Dalton Navalta, Monica N. Nicolescu, Mircea Nicolescu, Gail Woodward: Multinomial HMMs for Intent Recognition in Maritime Domains. 1856-1858
- Thomas Carr, Maria Chli, George Vogiatzis: Domain Adaptation for Reinforcement Learning on the Atari. 1859-1861
- Jacopo Castellini, Frans A. Oliehoek, Rahul Savani, Shimon Whiteson: The Representational Capacity of Action-Value Networks for Multi-Agent Reinforcement Learning. 1862-1864
- Jim Martin Catacora Ocana, Francesco Riccio, Roberto Capobianco, Daniele Nardi: Cooperative Multi-Agent Deep Reinforcement Learning in Soccer Domains. 1865-1867
- Andrea Celli, Giulia Romano, Nicola Gatti: Personality-Based Representations of Imperfect-Recall Games. 1868-1870
- Hau Chan, Jing Chen, Bo Li, Xiaowei Wu: Maximin-Aware Allocations of Indivisible Goods. 1871-1873
- Tristan Charrier, Arthur Queffelec, Ocan Sankur, François Schwarzentruber: Reachability and Coverage Planning for Connected Agents. 1874-1876
- Ritwik Chaudhuri, Kushal Mukherjee, Ramasuri Narayanam, Rohith Dwarakanath Vallam, Ayush Kumar, Antriksh Mathur, Shweta Garg, Sudhanshu Singh, Gyana R. Parija: Collaborative Reinforcement Learning Model for Sustainability of Cooperation in Sequential Social Dilemmas. 1877-1879
- Xiong-Hui Chen, Yang Yu: Reinforcement Learning with Derivative-Free Exploration. 1880-1882
- Safa Cicek, Alireza Nakhaei, Stefano Soatto, Kikuo Fujimura: MARL-PPS: Multi-agent Reinforcement Learning with Periodic Parameter Sharing. 1883-1885
- Jonathan Cohen, Abdel-Illah Mouaddib: Power Indices for Team Reformation Planning Under Uncertainty. 1886-1888
- Joe Collenette, Katie Atkinson, Daan Bloembergen, Karl Tuyls: Stability of Human-Inspired Agent Societies. 1889-1891
- Sarah Cooney, Phebe Vayanos, Thanh Hong Nguyen, Cleotilde Gonzalez, Christian Lebiere, Edward A. Cranford, Milind Tambe: Warning Time: Optimizing Strategic Signaling for Security Against Boundedly Rational Adversaries. 1892-1894
- Federico Corò, Emilio Cruciani, Gianlorenzo D'Angelo, Stefano Ponziani: Vote For Me!: Election Control via Social Influence in Arbitrary Scoring Rule Voting Systems. 1895-1897
- Jacob W. Crandall, Huy Pham: Cooperating in Long-term Relationships with Time-Varying Structure. 1898-1900
- Stephen Cranefield, Frank Dignum: Incorporating Social Practices in BDI Agent Systems. 1901-1903
- Michael Crosscombe, Jonathan Lawry: Evidence Propagation and Consensus Formation in Noisy Environments. 1904-1906
- Zeyuan Cui, Li Pan, Shijun Liu: Hybrid BiLSTM-Siamese Network for Relation Extraction. 1907-1909
- Christopher Culley, Ji Qi, Carmine Ventre: How to Get the Most from Goods Donated to Charities. 1910-1912
- Steven Damer, Maria L. Gini, Jeffrey S. Rosenschein: The Gift Exchange Game: Managing Opponent Actions. 1913-1915
- Sankarshan Damle, Boi Faltings, Sujit Gujar: A Truthful, Privacy-Preserving, Approximately Efficient Combinatorial Auction For Single-minded Bidders. 1916-1918
- Sankarshan Damle, Moin Hussain Moti, Praphul Chandra, Sujit Gujar: Aggregating Citizen Preferences for Public Projects Through Civic Crowdfunding. 1919-1921
- Alper Demir, Erkin Çilden, Faruk Polat: Landmark Based Reward Shaping in Reinforcement Learning with Hidden States. 1922-1924
- Palash Dey: Local Distance Restricted Bribery in Voting. 1925-1927
- Carlos Diaz Alvarenga, Nicola Basilico, Stefano Carpin: Delayed and Time-Variant Patrolling Strategies against Attackers with Local Observation Capabilities. 1928-1930
- Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda, Prabuchandran K. J., Shalabh Bhatnagar: Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning. 1931-1933
- Tom Eccles, Edward Hughes, János Kramár, Steven Wheelwright, Joel Z. Leibo: The Imitation Game: Learned Reciprocity in Markov games. 1934-1936
- Alexander Elkholy, Fangkai Yang, Steven Gustafson: Interpretable Automated Machine Learning in Maana™ Knowledge Platform. 1937-1939
- Tanguy Esteoule, Carole Bernon, Marie-Pierre Gleizes, Morgane Barthod: Improving Wind Power Forecasting through Cooperation: A Case-Study on Operating Farms. 1940-1942
- Richard Everett, Adam D. Cobb, Andrew Markham, Stephen J. Roberts: Optimising Worlds to Evaluate and Influence Reinforcement Learning Agents. 1943-1945
- Piotr Faliszewski, Piotr Skowron, Stanislaw Szufa, Nimrod Talmon: Proportional Representation in Elections: STV vs PAV. 1946-1948
- Matthias Feldotto, Pascal Lenzner, Louise Molitor, Alexander Skopalik: From Hotelling to Load Balancing: Approximation and the Principle of Minimum Differentiation. 1949-1951
- Diodato Ferraioli, Carmine Ventre: Obvious Strategyproofness, Bounded Rationality and Approximation. 1952-1954
- Angelo Ferrando, Michael Winikoff, Stephen Cranefield, Frank Dignum, Viviana Mascardi: On Enactability of Agent Interaction Protocols: Towards a Unified Approach. 1955-1957
- Thayanne França da Silva, José Luis Alves Leite, Raimundo Juracy Campos Ferro Junior, Leonardo Ferreira da Costa, Raphael Pinheiro de Souza, João Pedro Bernardino Andrade, Gustavo Augusto Lima de Campos: Smart Targets to Avoid Observation in CTO Problem. 1958-1960
- Jeroen Fransman, Joris Sijs, Henry Dol, Erik Theunissen, Bart De Schutter: Bayesian-DPOP for Continuous Distributed Constraint Optimization Problems. 1961-1963
- Tim French, Andrew Gozzard, Mark Reynolds: Dynamic Aleatoric Reasoning in Games of Bluffing and Chance. 1964-1966
- Johannes Günther, Alex Kearney, Nadia M. Ady, Michael Rory Dawson, Patrick M. Pilarski: Meta-learning for Predictive Knowledge Architectures: A Case Study Using TIDBD on a Sensor-rich Robotic Arm. 1967-1969
- Sunil Gandhi, Tim Oates, Tinoosh Mohsenin, Nicholas R. Waytowich: Learning Behaviors from a Single Video Demonstration Using Human Feedback. 1970-1972
- Francisco M. Garcia, Bruno C. da Silva, Philip S. Thomas: A Compression-Inspired Framework for Macro Discovery. 1973-1975
- Francisco M. Garcia, Philip S. Thomas: A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning. 1976-1978
- Moojan Ghafurian, Neil Budnarain, Jesse Hoey: Role of Emotions in Perception of Humanness of Virtual Agents. 1979-1981
- Leilani H. Gilpin, Lalana Kagal: An Adaptable Self-Monitoring Framework for Opaque Machines. 1982-1984
- Piotr J. Gmytrasiewicz, Sarit Adhikari: Optimal Sequential Planning for Communicative Actions: A Bayesian Approach. 1985-1987
- Arturo Gomez Chavez, Qingwen Xu, Christian A. Mueller, Sören Schwertfeger, Andreas Birk: Towards Accurate Deep-Sea Localization in Structured Environments based on Perception Quality Cues. 1988-1990
- Shubham Goyal, Nirav Ajmeri, Munindar P. Singh: Applying Norms and Sanctions to Promote Cybersecurity Hygiene. 1991-1993
- Davide Grossi, Simon Rey: Credulous Acceptability, Poison Games and Modal Logic. 1994-1996
- Vaibhav Gupta, Daksh Anand, Praveen Paruchuri, Balaraman Ravindran: Advice Replay Approach for Richer Knowledge Transfer in Teacher Student Framework. 1997-1999
- Michal Habani, Priel Levy, David Sarne: Contest Manipulation for Improved Performance. 2000-2002
- Chung-Kyun Han, Shih-Fen Cheng, Pradeep Varakantham: A Homophily-Free Community Detection Framework for Trajectories with Delayed Responses. 2003-2005
- Dongge Han, Wendelin Boehmer, Michael J. Wooldridge, Alex Rogers: Multi-Agent Hierarchical Reinforcement Learning with Dynamic Termination. 2006-2008
- Mohammad Rashedul Hasan: Towards a "Master Algorithm" for Forming Faster Conventions On Various Networks. 2009-2011
- Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening: Logically-Constrained Neural Fitted Q-iteration. 2012-2014
- Mojgan Hashemian, Ana Paiva, Samuel Mascarenhas, Pedro Alexandre Santos, Rui Prada: Social Power in Human-Robot Interaction: Towards More Persuasive Robots. 2015-2017
- Jesse Heyninck, Ofer Arieli: Simple Contrapositive Assumption-Based Frameworks. 2018-2020
- Shuyue Hu, Chin-wing Leung, Ho-fung Leung, Jiamou Liu: To be Big Picture Thinker or Detail-Oriented?: Utilizing Perceived Gist Information to Achieve Efficient Convention Emergence with Bilateralism and Multilateralism. 2021-2023
- Taoan Huang, Bohui Fang, Hoon Oh, Xiaohui Bei, Fei Fang: Optimal Trip-Vehicle Dispatch with Multi-Type Requests. 2024-2026
- Ayumi Igarashi, Kazunori Ota, Yuko Sakurai, Makoto Yokoo: Robustness against Agent Failure in Hedonic Games. 2027-2029
- Craig Innes, Alex Lascarides: Learning Factored Markov Decision Processes with Unawareness. 2030-2032
- Anisse Ismaili, Kentaro Yahiro, Tomoaki Yamaguchi, Makoto Yokoo: Student-Project-Resource Matching-Allocation Problems: Two-Sided Matching Meets Resource Allocation. 2033-2035
- Anisse Ismaili, Noam Hazon, Emi Watanabe, Makoto Yokoo, Sarit Kraus: Complexity and Approximations in Robust Coalition Formation via Max-Min k-Partitioning. 2036-2038
- Mohammad Ali Javidian, Pooyan Jamshidi, Rasoul Ramezanian: Avoiding Social Disappointment in Elections. 2039-2041
- Nitin Kamra, Umang Gupta, Kai Wang, Fei Fang, Yan Liu, Milind Tambe: Deep Fictitious Play for Games with Continuous Action Spaces. 2042-2044
- Jan Karwowski, Jacek Mandziuk: Stackelberg Equilibrium Approximation in General-Sum Extensive-Form Games with Double-Oracle Sampling Method. 2045-2047
- Ryohei Kawata, Katsuhide Fujita: Meta-Strategy for Multi-Time Negotiation: A Multi-Armed Bandit Approach. 2048-2050
- Batya Kenig: The Complexity of the Possible Winner Problem with Partitioned Preferences. 2051-2053
- Shauharda Khadka, Connor Yates, Kagan Tumer: Memory based Multiagent One Shot Learning. 2054-2056
- Zine El Abidine Kherroubi, Samir Aknine, Rebiha Bacha: Dynamic and Intelligent Control of Autonomous Vehicles for Highway On-ramp Merge. 2057-2059
- Seungchan Kim, Kavosh Asadi, Michael L. Littman, George Dimitri Konidaris: Removing the Target Network from Deep Q-Networks with the Mellowmax Operator. 2060-2062
- Vincent J. Koeman, Koen V. Hindriks, Jonathan Gratch, Catholijn M. Jonker: Recognising and Explaining Bidding Strategies in Negotiation Support Systems. 2063-2065
- Christine Konicki, Virginia Vassilevska Williams: Bribery in Balanced Knockout Tournaments. 2066-2068
- Ngai Meng Kou, Cheng Peng, Xiaowei Yan, Zhiyuan Yang, Heng Liu, Kai Zhou, Haibing Zhao, Lijun Zhu, Yinghui Xu: Multi-agent Path Planning with Non-constant Velocity Motion. 2069-2071
- Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, Hedvig Kjellström: On the Importance of Representations for Speech-Driven Gesture Generation. 2072-2074
- Anagha Kulkarni, Yantian Zha, Tathagata Chakraborti, Satya Gautam Vadlamudi, Yu Zhang, Subbarao Kambhampati: Explicable Planning as Minimizing Distance from Expected Behavior. 2075-2077
- Sumit Kumar, Wenhao Luo, George Kantor, Katia P. Sycara: Active Learning with Gaussian Processes for High Throughput Phenotyping. 2078-2080
- Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, Ofra Amir: Toward Robust Policy Summarization. 2081-2083
- Michael William Lanighan, Roderic A. Grupen: Long-term Autonomous Mobile Manipulation under Uncertainty. 2084-2086
- Haralambie Leahu, Michael Kaisers, Tim Baarslag: Preference Learning in Automated Negotiation Using Gaussian Uncertainty Models. 2087-2089
- Donghun Lee, Warren B. Powell: Meta-learning of Bidding Agent with Knowledge Gradient in a Fully Agent-based Sponsored Search Auction Simulator. 2090-2092
- Priel Levy, David Sarne, Yonatan Aumann: Selective Information Disclosure in Contests. 2093-2095
- Jialian Li, Tongzheng Ren, Hang Su, Jun Zhu: Learn a Robust Policy in Adversarial Games via Playing with an Expert Opponent. 2096-2098
- Zelei Liu, Han Yu, Leye Wang, Liang Hu, Qiang Yang: Social Mobilization to Reposition Indiscriminately Parked Shareable Bikes. 2099-2101
- Matteo Luperto, Danilo Fusi, N. Alberto Borghese, Francesco Amigoni: Exploiting Inaccurate A Priori Knowledge in Robot Exploration. 2102-2104
- Manao Machida: Polynomial-Time Multi-Agent Pathfinding with Heterogeneous and Self-Interested Agents. 2105-2107
- Marco Maier, Chadly Marouane, Daniel Elsner: DeepFlow: Detecting Optimal User Experience From Physiological Data Using Deep Neural Networks. 2108-2110
- Padala Manisha, Sujit Gujar: Thompson Sampling Based Multi-Armed-Bandit Mechanism Using Neural Networks. 2111-2113
- Stefano Mariani, Angelo Croatti, Alessandro Ricci, Andrea Prati, Giuseppe Vizzari: ViTALiSE: Virtual to Augmented Loop in Smart Environments. 2114-2116
- Borislav Mavrin, Shangtong Zhang, Hengshuai Yao, Linglong Kong: Exploration in the Face of Parametric and Intrinsic Uncertainties. 2117-2119
- Reshef Meir: Strategyproof Facility Location for Three Agents on a Circle. 2120-2122
- Jacob Menashe, Peter Stone: Escape Room: A Configurable Testbed for Hierarchical Reinforcement Learning. 2123-2125
- John Mern, Dorsa Sadigh, Mykel J. Kochenderfer: Object Exchangability in Reinforcement Learning. 2126-2128
- Yuki Miyashita, Toshiharu Sugawara: Coordination Structures Generated by Deep Reinforcement Learning in Distributed Task Executions. 2129-2131
- Akshay Narayan, Tze-Yun Leong: Effects of Task Similarity on Policy Transfer with Selective Exploration in Reinforcement Learning. 2132-2134
- Setareh Nasihati Gilani, David R. Traum, Rachel Sortino, Grady Gallagher, Kailyn Aaron-Lozano, Cryss Padilla, Ari Shapiro, Jason Lamberton, Laura-Ann Petitto: Can a Virtual Human Facilitate Language Learning in a Young Baby? 2135-2137
- Aadesh Neupane, Michael A. Goodrich: Designing Emergent Swarm Behaviors using Behavior Trees and Grammatical Evolution. 2138-2140
- Hoang Nga Nguyen, Abdur Rakib: Probabilistic Resource-bounded Alternating-time Temporal Logic. 2141-2143
- Arianna Novaro, Umberto Grandi, Dominique Longin, Emiliano Lorini: Strategic Majoritarian Voting with Propositional Goals. 2144-2146
- Suman Ojha, Jonathan Vitale, Syed Ali Raza, Richard Billingsley, Mary-Anne Williams: Integrating Personality and Mood with Agent Emotions. 2147-2149
- Keisuke Otaki, Satoshi Koide, Ayano Okoso, Tomoki Nishi: Cooperative Routing with Heterogeneous Vehicles. 2150-2152
- Aldo Pacchiano, Yoram Bachrach: Computing Stable Solutions in Threshold Network Flow Games With Bounded Treewidth. 2153-2155
- Simon Pageaud, Véronique Deslandres, Vassilissa Lehoux, Salima Hassas: Multiagent Learning and Coordination with Clustered Deep Q-Network. 2156-2158
- Theodore J. Perkins: Optimal Risk in Multiagent Blind Tournaments. 2159-2161
- Thomy Phan, Kyrill Schmid, Lenz Belzner, Thomas Gabor, Sebastian Feld, Claudia Linnhoff-Popien: Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies. 2162-2164
- Nico Potyka: A Polynomial-time Fragment of Epistemic Probabilistic Argumentation. 2165-2167
- Aida Rahmattalabi, Phebe Vayanos, Anthony Fulginiti, Milind Tambe: Robust Peer-Monitoring on Graphs with an Application to Suicide Prevention in Social Networks. 2168-2170
- Sai Koti Reddy Danda, Amrita Saha, Srikanth G. Tamilselvam, Priyanka Agrawal, Pankaj Dayama: Risk Averse Reinforcement Learning for Mixed Multi-agent Environments. 2171-2173
- Golden Rockefeller, Patrick Mannion, Kagan Tumer: Curriculum Learning for Tightly Coupled Multiagent Systems. 2174-2176
- Pierre Rust, Gauthier Picard, Fano Ramparany: Installing Resilience in Distributed Constraint Optimization Operated by Physical Multi-Agent Systems. 2177-2179
- Himangshu Saikia, Fangkai Yang, Christopher Peters: Priority driven Local Optimization for Crowd Simulation. 2180-2182
- Yakoub Salhi: Entailment Functions and Reasoning Under Inconsistency. 2183-2185
- Mikayel Samvelyan, Tabish Rashid, Christian Schröder de Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob N. Foerster, Shimon Whiteson: The StarCraft Multi-Agent Challenge. 2186-2188
- Hassam Ullah Sheikh, Ladislau Bölöni: Emergence of Scenario-Appropriate Collaborative Behaviors for Teams of Robotic Bodyguards. 2189-2191
- Shusuke Shigenaka, Shunki Takami, Yoshihiko Ozaki, Masaki Onishi, Tomohisa Yamashita, Itsuki Noda: Evaluation of Optimization for Pedestrian Route Guidance in Real-world Crowded Scene. 2192-2194
- Maayan Shvo, Jakob Buhmann, Mubbasir Kapadia: Towards Modeling the Interplay of Personality, Motivation, Emotion, and Mood in Social Agents. 2195-2197
- Nikolaos I. Spanoudakis, Charilaos Akasiadis, Georgios Kechagias, Georgios Chalkiadakis: An Open MAS Services Architecture for the V2G/G2V Problem. 2198-2200
- Fan-Yun Sun, Yen-Yu Chang, Yueh-Hua Wu, Shou-De Lin: A Regulation Enforcement Solution for Multi-agent Reinforcement Learning. 2201-2203
- Samarth Swarup, Reza Rezazadegan: Generating an Agent Taxonomy Using Topological Data Analysis. 2204-2205
- Seiji Takanashi, Makoto Yokoo: Two-stage N-person Prisoner's Dilemma with Social Preferences. 2206-2208
- Hongyao Tang, Jianye Hao, Li Wang, Zan Wang, Tim Baarslag: An Optimal Rewiring Strategy for Cooperative Multiagent Social Learning. 2209-2211
- Zoi Terzopoulou, Ulle Endriss: Rethinking the Neutrality Axiom in Judgment Aggregation. 2212-2214
- Omkar Thakoor, Milind Tambe, Phebe Vayanos, Haifeng Xu, Christopher Kiekintveld: General-Sum Cyber Deception Games under Partial Attacker Valuation Information. 2215-2217
- Madhura Thosar, Christian A. Mueller, Sebastian Zug, Max Pfingsthorn: Towards a Prototypical Approach to Tool-Use Improvisation. 2218-2219
- Sutasinee Thovuttikul, Yoshimasa Ohmoto, Toyoaki Nishida: The Effect of First- and Third-person POVs on Different Cultural Communication: How Japanese People Understand Social Conversation at Thai Night Flea Markets. 2220-2222
- Myrthe L. Tielman, Catholijn M. Jonker, M. Birna van Riemsdijk: Deriving Norms from Actions, Values and Context. 2223-2225
- Manan Tomar, Akhil Sathuluri, Balaraman Ravindran: MaMiC: Macro and Micro Curriculum for Robotic Reinforcement Learning. 2226-2228
- Faraz Torabi, Garrett Warnell, Peter Stone: Adversarial Imitation Learning from State-only Demonstrations. 2229-2231
- Gianluca Torta, Roberto Micalizio, Samuele Sormano: Explaining Failures Propagations in the Execution of Multi-Agent Temporal Plans. 2232-2234
- Rohith Dwarakanath Vallam, Ramasuri Narayanam, Srikanth G. Tamilselvam, Nicholas Mattei, Sudhanshu S. Singh, Shweta Garg, Gyana R. Parija: DeepAggregation: A New Approach for Aggregating Incomplete Ranked Lists using Multi-Layer Graph Embedding. 2235-2237
- Colin Vandenhof, Edith Law: Contradict the Machine: A Hybrid Approach to Identifying Unknown Unknowns. 2238-2240
- Vivek Shankar Varadharajan, Bram Adams, Giovanni Beltrame: The Unbroken Telephone Game: Keeping Swarms Connected. 2241-2243
- Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura: Online Motion Concept Learning: A Novel Algorithm for Sample-Efficient Learning and Recognition of Human Actions. 2244-2246
- Kyle Vedder, Joydeep Biswas: X*: Anytime Multiagent Planning With Bounded Search. 2247-2249
- Richa Verma, Sarmimala Saikia, Harshad Khadilkar, Puneet Agarwal, Gautam Shroff, Ashwin Srinivasan: A Reinforcement Learning Framework for Container Selection and Ship Load Sequencing in Ports. 2250-2252
- Jiangxing Wang, Jiaoyang Li, Hang Ma, Sven Koenig, T. K. Satish Kumar: A New Constraint Satisfaction Perspective on Multi-Agent Path Finding: Preliminary Results. 2253-2255
- Shiheng Wang, Fangzhen Lin: Invincible Strategies of Iterated Prisoner's Dilemma. 2256-2258
- Wanyuan Wang, Zichen Dong, Bo An, Yichuan Jiang: Efficient City-Scale Patrolling Using Decomposition and Grafting. 2259-2261
- Kacper Wardega, Roberto Tron, Wenchao Li: Masquerade Attack Detection Through Observation Planning for Multi-Robot Systems. 2262-2264
- Ermo Wei, Drew Wicke, Sean Luke: Multiagent Adversarial Inverse Reinforcement Learning. 2265-2266
- Nic Wilson: Generating Voting Rules from Random Relations. 2267-2269
- Kyle Hollins Wray, Shlomo Zilberstein: Policy Networks: A Framework for Scalable Integration of Multiple Decision-Making Models. 2270-2272
- Shangyu Xie, Yuan Hong, Peng-Jun Wan: A Privacy Preserving Multiagent System for Load Balancing in the Smart Grid. 2273-2275
- Ruiyang Xu, Karl J. Lieberherr: Learning Self-Game-Play Agents for Combinatorial Optimization Problems. 2276-2278
- Bo Yan, Kexiu Song, Jiamou Liu, Fanku Meng, Yiping Liu, Hongyi Su: On the Maximization of Influence Over an Unknown Social Network. 2279-2281
- Tianpei Yang, Jianye Hao, Zhaopeng Meng, Yan Zheng, Chongjie Zhang, Ze Zheng: Bayes-ToMoP: A Fast Detection and Best Response Algorithm Towards Sophisticated Opponents. 2282-2284
- Yaodong Yang, Jianye Hao, Yan Zheng, Xiaotian Hao, Bofeng Fu: Large-Scale Home Energy Management Using Entropy-Based Collective Multiagent Reinforcement Learning Framework. 2285-2287
- Yi Yang, Quan Bai, Qing Liu: Modeling Random Guessing and Task Difficulty for Truth Inference in Crowdsourcing. 2288-2290
- Yongjie Yang, Jianxin Wang: Complexity of Additive Committee Selection with Outliers. 2291-2293
- Nutchanon Yongsatianchot, Stacy Marsella: Modeling Human Decision-Making during Hurricanes: From Model to Data Collection to Prediction. 2294-2296
- Chao Yu, Xin Wang, Zhanbo Feng: Coordinated Multiagent Reinforcement Learning for Teams of Mobile Sensing Robots. 2297-2299
- Han Yu, Zhiqi Shen, Lizhen Cui, Yongqing Zheng, Victor R. Lesser: Ethically Aligned Multi-agent Coordination to Enhance Social Welfare. 2300-2302
- Alon Zanbar, Gal A. Kaminka: Is Agent Software More Complex than Other Software? 2303-2305
- Hedayat Zarkoob, Hu Fu, Kevin Leyton-Brown: Report-Sensitive Spot-checking in Peer Grading Systems. 2306-2308
- Nicholas Zerbel, Logan Yliniemi: Multiagent Monte Carlo Tree Search. 2309-2311
- Jianyu Zhang, Jianye Hao, Françoise Fogelman-Soulié, Zan Wang: Automatic Feature Engineering by Deep Reinforcement Learning. 2312-2314
- Han Zhao, Junjie Hu, Zhenyao Zhu, Adam Coates, Geoffrey J. Gordon: Deep Generative and Discriminative Domain Adaptation. 2315-2317
- Wei-Ye Zhao, Jian Peng: Stochastic Variance Reduction for Deep Q-learning. 2318-2320
- Yuhang Zhao, Xiujun Ma: Learning Efficient Communication in Cooperative Multi-Agent Environment. 2321-2323
- Changxi Zhu, Ho-fung Leung, Shuyue Hu, Yi Cai: A Q-values Sharing Framework for Multiple Independent Q-learners. 2324-2326
Demonstrations
- João Paulo Aires, Roger Granada, Felipe Meneguzzi: ConCon: A Contract Conflict Identifier. 2327-2329
- Matteo Baldoni, Cristina Baroglio, Roberto Micalizio, Stefano Tedeschi: Implementing Business Processes in JaCaMo+ by Exploiting Accountability and Responsibility. 2330-2332
- Roman Barták, Ivan Krasicenko, Jirí Svancara: Multi-Agent Path Finding on Real Robots. 2333-2335
- Elizabeth Bondi, Hoon Oh, Haifeng Xu, Fei Fang, Bistra Dilkina, Milind Tambe: Using Game Theory in Real Time in the Real World: A Conservation Case Study. 2336-2338
- Mason Bretan, Siddharth Sanan, Larry P. Heck: Learning an Effective Control Policy for a Robotic Drumstick via Self-Supervision. 2339-2341
- Alberto Castellini, Francesco Masillo, Riccardo Sartea, Alessandro Farinelli: eXplainable Modeling (XM): Data Analysis for Intelligent Agents. 2342-2344
- Martin Chapman, Panagiotis Balatsoukas, Mark Ashworth, Vasa Curcin, Nadin Kökciyan, Kai Essers, Isabel Sassoon, Sanjay Modgil, Simon Parsons, Elizabeth I. Sklar: Computational Argumentation-based Clinical Decision Support. 2345-2347
- Siqi Chen, Yonghao Cui, Cong Shang, Jianye Hao, Gerhard Weiss: ONECG: Online Negotiation Environment for Coalitional Games. 2348-2350
- Filipa Correia, Samuel Mascarenhas, Samuel Gomes, Silvia Tulli, Fernando P. Santos, Francisco C. Santos, Rui Prada, Francisco S. Melo, Ana Paiva: For The Record - A Public Goods Game For Exploring Human-Robot Collaboration. 2351-2353
- Deepeka Garg, Maria Chli, George Vogiatzis: Traffic3D: A New Traffic Simulation Paradigm. 2354-2356
- Manuel Guimarães, Samuel Mascarenhas, Rui Prada, Pedro Alexandre Santos, João Dias: An Accessible Toolkit for the Creation of Socio-Emotional Agents. 2357-2359
- Seyed Ali Hosseini, Diarmid Campbell, Marco Favorito, Jonathan Ward: Peer-to-Peer Negotiation for Optimising Journeys of Electric Vehicles on a Tour of Europe. 2360-2362
- Martin Jedwabny, Pierre Bisquert, Madalina Croitoru: PAPOW: Papow Aggregates Preferences and Orderings to select Winners. 2363-2365
- Amin Karamlou, Kristijonas Cyras, Francesca Toni: Deciding the Winner of a Debate Using Bipolar Argumentation. 2366-2368
- Muralidhar Konda, Pradeep Varakantham, Aayush Saxena, Meghna Lowalekar: RE-ORG: An Online Repositioning Guidance Agent. 2369-2371
- Damian Kurpiewski, Wojciech Jamroga, Michal Knapik: STV: Model Checking for Strategies under Imperfect Information. 2372-2374
- Tiago Pinto, Zita A. Vale: ALBidS: A Decision Support System for Strategic Bidding in Electricity Markets. 2375-2377
- Tiago Pinto, Gabriel Santos, Zita A. Vale: Practical Application of a Multi-Agent Systems Society for Energy Management and Control. 2378-2380
- Luke Riley, Grammateia Kotsialou, Amrita Dhillon, Toktam Mahmoodi, Peter McBurney, Richard Pearce: Deploying a Shareholder Rights Management System onto a Distributed Ledger. 2381-2383
- Francisco Silva, Tiago Pinto, Zita A. Vale: Decision Support System for Opponents Selection in Electricity Markets Bilateral Negotiations. 2384-2386
- David St-Onge, Vivek Shankar Varadharajan, Giovanni Beltrame: Tangible Robotic Fleet Control. 2387-2389
- Bruno Yun, Madalina Croitoru, Srdjan Vesic, Pierre Bisquert: NAKED: N-Ary Graphs from Knowledge Bases Expressed in Datalog±. 2390-2392
Doctoral Consortium
- Mohammad Mehdi Afsar: Intelligent Multi-Purpose Healthcare Bot Facilitating Shared Decision Making. 2393-2395
- Mattias Appelgren: Teaching Agents Through Correction. 2396-2398
- Nikhil Bhargava: Multi-Agent Coordination under Uncertain Communication. 2399-2401
- Elizabeth Bondi: Bridging the Gap Between High-Level Reasoning in Strategic Agent Coordination and Low-Level Agent Development. 2402-2404
- Yunshu Du: Improving Deep Reinforcement Learning via Transfer. 2405-2407
- Mojgan Hashemian: Persuasive Social Robots using Social Power Dynamics. 2408-2410
- Khoi D. Hoang: Proactive Distributed Constraint Optimization Problems. 2411-2413
- Tobias Hogrebe: Complexity of Distances in Elections: Doctoral Consortium. 2414-2416
- Ridi Hossain: Sharing is Caring: Dynamic Mechanism for Shared Resource Ownership. 2417-2419
- Patrik Jonell: Using Social and Physiological Signals for User Adaptation in Conversational Agents. 2420-2422
- Timotheus Kampik: Empathic Agents: A Hybrid Normative/Consequentialistic Approach. 2423-2425
- Vera A. Kazakova: Adaptable Decentralized Task Allocation of Swarm Agents. 2426-2428
- Bo Li: Mechanism Design with Unstructured Beliefs. 2429-2431
- Prashan Madumal: Explainable Agency in Intelligent Agents: Doctoral Consortium. 2432-2434
- Louise Molitor: Strategic Location and Network Formation Games. 2435-2437
- Andreea-Oana Petac: Conversational Narrative Interfaces for Sensemaking. 2438-2440
- Jacob Schlueter: Novel Hedonic Games and Lottery Systems. 2441-2443
- Garima Shakya: Problems in Computational Mechanism Design. 2444-2446
- Felipe Leno da Silva: Integrating Agent Advice and Previous Task Solutions in Multiagent Reinforcement Learning. 2447-2448
- Martin Strobel: Aspects of Transparency in Machine Learning. 2449-2451
- Xintong Wang: Studies on the Computational Modeling and Design of Financial Markets. 2452-2454
- Su Zhang: Enhanced Learning from Multiple Demonstrations with a Flexible Two-level Structure Approach. 2455-2457
