


CIG 2015: Tainan, Taiwan
2015 IEEE Conference on Computational Intelligence and Games, CIG 2015, Tainan, Taiwan, August 31 - September 2, 2015. IEEE 2015, ISBN 978-1-4799-8622-4

- Xin Yao: Keynote speech I: Co-evolutionary learning in game-playing. 16
- Simon Lucas: Keynote speech II: General video game AI: Challenges and applications. 17
- Martin Müller: Keynote speech III: Computer go research - The challenges ahead. 18
- Graham Kendall: Keynote speech IV: Where games meet hyper-heuristics. 19
- Diego Perez: Tutorial I: Video Game Description Language (VGDL) and the challenge of creating agents for General Video Game Playing (GVGP). 20
- Daniel A. Ashlock: Tutorial II: Representations for evolutionary computation in games. 21
- Risto Miikkulainen: Tutorial III: Evolving neural networks. 22
- Yun-Ching Liu, Yoshimasa Tsuruoka: Regulation of exploration for simple regret minimization in Monte-Carlo tree search. 35-42
- Takahisa Imagawa, Tomoyuki Kaneko: Enhancements in Monte Carlo tree search algorithms for biased game trees. 43-50
- Naoyuki Sato, Kokolo Ikeda, Takayuki Wada: Estimation of player's preference for cooperative RPGs using multi-strategy Monte-Carlo method. 51-59
- Mihai Sorin Dobre, Alex Lascarides: Online learning and mining human play in complex games. 60-67
- Edward Booth, John Thangarajah, Fabio Zambetta: Flexible story generation with norms and preferences in computer role playing games. 68-74
- David B. Carvalho, Esteban Walter Gonzalez Clua, Aline Paes: Perception simulation in social planning for emergent storytelling. 75-82
- Fabio A. Guilherme da Silva, Bruno Feijó: Personality traits in plots with nondeterministic planning for interactive storytelling. 83-90
- Simon Chauvin, Guillaume Levieux, Jean-Yves Donnart, Stéphane Natkin: Making sense of emergent narratives: An architecture supporting player-triggered narrative processes. 91-98
- Willian M. P. Reis, Levi H. S. Lelis, Ya'akov (Kobi) Gal: Human computation for procedural content generation in platform games. 99-106
- Frederik Frydenberg, Kasper R. Andersen, Sebastian Risi, Julian Togelius: Investigating MCTS modifications in general video game playing. 107-113
- Peter I. Cowling, Daniel Whitehouse, Edward Jack Powley: Emergent bluffing and inference with Monte Carlo Tree Search. 114-121
- Nick Sephton, Peter I. Cowling, Nicholas H. Slaven: An experimental study of action selection mechanisms to create an entertaining opponent. 122-129
- David Stammer, Tobias Günther, Mike Preuss: Player-adaptive Spelunky level generation. 130-137
- Caio Freitas de Oliveira, Charles Andryê Galvão Madeira: Creating efficient walls using potential fields in real-time strategy games. 138-145
- Haiyan Yin, Linbo Luo, Wentong Cai, Yew-Soon Ong, Jinghui Zhong: A data-driven approach for online adaptation of game difficulty. 146-153
- Jayden Ivanovo, William L. Raffe, Fabio Zambetta, Xiaodong Li: Combining Monte Carlo tree search and apprenticeship learning for capture the flag. 154-161
- Josef Moudrík, Petr Baudis, Roman Neruda: Evaluating Go game records for prediction of player attributes. 162-168
- Michael Dann, Fabio Zambetta, John Thangarajah: An improved approach to reinforcement learning in Computer Go. 169-176
- Hirotaka Kameko, Shinsuke Mori, Yoshimasa Tsuruoka: Learning a game commentary generator with grounded move expressions. 177-184
- Thorbjørn S. Nielsen, Gabriella A. B. Barros, Julian Togelius, Mark J. Nelson: Towards generating arcade game rules with VGDL. 185-192
- Amin Babadi, Behnaz Omoomi, Graham Kendall: EnHiC: An enforced hill climbing based system for general game playing. 193-199
- Spyridon Samothrakis, Diego Perez Liebana, Simon M. Lucas, Maria Fasli: Neuroevolution for General Video Game Playing. 200-207
- Thomas Philip Runarsson, Simon M. Lucas: On imitating Connect-4 game trajectories using an approximate n-tuple evaluation function. 208-213
- Tetsuyuki Takahama, Setsuko Sakai: Emerging collective intelligence in Othello players evolved by differential evolution. 214-221
- Xi Liang, Tinghan Wei, I-Chen Wu: Job-level UCT search for solving Hex. 222-229
- Hanting Xie, Sam Devlin, Daniel Kudenko, Peter I. Cowling: Predicting player disengagement and first purchase with event-frequency based data representation. 230-237
- Chong-U Lim, D. Fox Harrell: Understanding players' identities and behavioral archetypes from avatar customization data. 238-245
- Dominic Kao, D. Fox Harrell: Toward avatar models to enhance performance and engagement in educational games. 246-253
- Jr-Chang Chen, Gang-Yu Fan, Shih-Yu Tsai, Ting-Yu Lin, Tsan-sheng Hsu: Compressing Chinese dark chess endgame databases. 254-259
- Hung-Jui Chang, Chih-Wen Hsueh, Tsan-sheng Hsu: Convergence and correctness analysis of Monte-Carlo tree search algorithms: A case study of 2 by 4 Chinese dark chess. 260-266
- Jiao Wang, Tan Zhu, Hongye Li, Chu-Hsuan Hsueh, I-Chen Wu: Belief-state Monte-Carlo tree search for Phantom games. 267-274
- Naoki Mizukami, Yoshimasa Tsuruoka: Building a computer Mahjong player based on Monte Carlo simulation and opponent models. 275-283
- Pablo García-Sánchez, Alberto Paolo Tonda, Antonio Miguel Mora, Giovanni Squillero, Juan Julián Merelo Guervós: Towards automatic StarCraft strategy generation using genetic programming. 284-291
- Igor V. Karpov, Leif M. Johnson, Risto Miikkulainen: Evaluating team behaviors constructed with human-guided machine learning. 292-298
- Baozhu Jia, Marc Ebner: A strongly typed GP-based video game player. 299-305
- Daniel A. Ashlock: Evolvable fashion-based cellular automata for generating cavern systems. 306-313
- Yimeng Zhuang, Shuqin Li, Tom Vincent Peters, Chenguang Zhang: Improving Monte-Carlo tree search for dots-and-boxes with a novel board representation and artificial neural networks. 314-321
- Spencer Polk, B. John Oommen: Space and depth-related enhancements of the history-ADS strategy in game playing. 322-327
- Henrique Castro Neto, Rita Maria da Silva Julia: ACE-RL-Checkers: Improving automatic case elicitation through knowledge obtained by reinforcement learning in player agents. 328-335
- Steffen Kampmann, Sven Seele, Rainer Herpers, Peter Becker, Christian Bauckhage: Automatic mapping of human behavior data to personality model parameters for traffic simulations in virtual environments. 336-343
- Frank G. Glavin, Michael G. Madden: Learning to shoot in first person shooter games by stabilizing actions and clustering rewards for reinforcement learning. 344-351
- Giorgia Baroffio, Luca Galli, Piero Fraternali: Designing bots in games with a purpose. 352-359
- Pujana Paliyawan, Kingkarn Sookhanaphibarn, Worawat Choensawat, Ruck Thawonmas: Body motion design and analysis for fighting game interface. 360-367
- Lee-Ann Barlow, Jeffrey Tsang: Play profiles: The effect of infinite-length games on evolution in the iterated Prisoner's Dilemma. 368-375
- Marie-Liesse Cauwet, Olivier Teytaud, Tristan Cazenave, Abdallah Saffidine, Hua-Min Liang, Shi-Jim Yen, Hung-Hsuan Lin, I-Chen Wu: Depth, balancing, and limits of the Elo model. 376-382
- Garrison W. Greenwood: Evolving strategies to help resolve tragedy of the commons social dilemmas. 383-390
- Daniel Wehr, Jörg Denzinger: Mining game logs to create a playbook for unit AIs. 391-398
- Pedro Sequeira, Francisco S. Melo, Ana Paiva: "Let's save resources!": A dynamic, collaborative AI for a multiplayer environmental awareness game. 399-406
- Tsubasa Fujiki, Kokolo Ikeda, Simon Viennot: A platform for turn-based strategy games, with a comparison of Monte-Carlo algorithms. 407-414
- Marc J. van Kreveld, Maarten Löffler, Paul Mutser: Automated puzzle difficulty estimation. 415-422
- Joao Quiterio, Rui Prada, Francisco S. Melo: A reinforcement learning approach for the circle agent of geometry friends. 423-430
- Rui Prada, Phil Lopes, Joao Catarino, Joao Quiterio, Francisco S. Melo: The geometry friends game AI competition. 431-438
- Han-Hsien Huang, Tsaipei Wang: Learning overtaking and blocking skills in simulated car racing. 439-445
- Jilin Huang, Ivan Tanev, Katsunori Shimohara: Evolving a general electronic stability program for car simulated in TORCS. 446-453
- Jan Quadflieg, Günter Rudolph, Mike Preuss: How costly is a good compromise: Multi-objective TORCS controller parameter optimization. 454-460
- Erella Eisenstadt, Amiram Moshaiov, Gideon Avigad: Co-evolution of strategies for multi-objective games under postponed objective preferences. 461-468
- Takeshi Ito, Yuuma Kitasei: Proposal and implementation of "digital curling". 469-473
- Masahito Yamamoto, Shu Kato, Hiroyuki Iizuka: Digital curling strategy based on game tree search. 474-480
- Fumito Masui, Hiroki Ueno, Hitoshi Yanagi, Michal Ptaszynski: Toward curling informatics - Digital scorebook development and game information analysis. 481-488
- Takashi Kawamura, Ryosuke Kamimura, Satoshi Suzuki, Kojiro Iizuka: A study on the curling robot will match with human result of one end game with one human. 489-495
- Anna Lisa Martin-Niedecken, René Bauer, Ralf Mauerhofer, Ulrich Götz: "RehabConnex": A middleware for the flexible connection of multimodal game applications with input devices used in movement therapy and physical exercising. 496-502
- William L. Raffe, Marco Tamassia, Fabio Zambetta, Xiaodong Li, Florian 'Floyd' Mueller: Enhancing theme park experiences through adaptive cyber-physical play. 503-510
- Ivan Zelinka, Lubomir Sikora: StarCraft: Brood War - Strategy powered by the SOMA swarm algorithm. 511-516
- Kazuki Asayama, Koichi Moriyama, Ken-ichi Fukui, Masayuki Numao: Prediction as faster perception in a real-time fighting video game. 517-522
- Chun Yin Chu, Hisaaki Hashizume, Zikun Guo, Tomohiro Harada, Ruck Thawonmas: Combining pathfinding algorithm with Knowledge-based Monte-Carlo tree search in general video game playing. 523-529
- Aliona Kozlova, Joseph Alexander Brown, Elizabeth Reading: Examination of representational expression in maze generation algorithms. 532-533
- Hyun-Soo Park, Kyung-Joong Kim: MCTS with influence map for general video game playing. 534-535
- In-Seok Oh, Kyung-Joong Kim: Testing reliability of replay-based imitation for StarCraft. 536-537
- Cheong-mok Bae, Eun Kwang Kim, Jongchan Lee, Kyung-Joong Kim, Joong Chae Na: Generation of an arbitrary shaped large maze by assembling mazes. 538-539
- Joseph Alexander Brown: Towards better personas in gaming: Contract based expert systems. 540-541
- Li-Wei Ko, Peng-Wen Lai, Bao-Jun Yang, Chin-Teng Lin: Mobile EEG & ECG integration system for monitoring physiological states in performing simulated war game training. 542-543
- Du-Mim Yoon, Joo-Seon Lee, Hyun-Su Seon, Jeong-Hyeon Kim, Kyung-Joong Kim: Optimization of Angry Birds AI controllers with distributed computing. 544-545
- Joseph Alexander Brown, Qiang Qu: Systems for player reputation with NPC agents. 546-547
- Sehar Shahzad Farooq, Jong-Woong Baek, Kyung-Joong Kim: Interpreting behaviors of mobile game players from in-game data and context logs. 548-549
