CIG 2013: Niagara Falls, ON, Canada
- 2013 IEEE Conference on Computational Intelligence in Games (CIG), Niagara Falls, ON, Canada, August 11-13, 2013. IEEE 2013, ISBN 978-1-4673-5308-3
- Daniel A. Ashlock, Jeremy Gilbert:
  Creativity and competitiveness in polyomino-developing game playing agents. 1-8
- Rafet Sifa, Anders Drachen, Christian Bauckhage, Christian Thurau, Alessandro Canossa:
  Behavior evolution in Tomb Raider Underworld. 1-8
- Mike Preuss, Daniel Kozakowski, Johan Hagelbäck, Heike Trautmann:
  Reactive strategy choice in StarCraft by means of Fuzzy Control. 1-8
- Héctor Adrián Díaz Furlong, Ana Luisa Solís González Cosío:
  An approach to level design using procedural content generation and difficulty curves. 1-8
- José María Peña Sánchez, Ernestina Menasalvas Ruiz, Santiago Muelas, Antonio LaTorre, Luís Peña, Sascha Ossowski:
  Soft computing for content generation: Trading market in a basketball management video game. 1-8
- Nicholas Bowen, Jonathan Todd, Gita Sukthankar:
  Adjutant bot: An evaluation of unit micromanagement tactics. 1-8
- Mihai Polceanu:
  MirrorBot: Using human-inspired mirroring behavior to pass a Turing test. 1-8
- Kyriakos Efthymiadis, Daniel Kudenko:
  Using plan-based reward shaping to learn strategies in StarCraft: Broodwar. 1-8
- Mark O. Riedl, Alexander Zook:
  AI for game production. 1-8
- Kien Quang Nguyen, Zhe Wang, Ruck Thawonmas:
  Potential flows for controlling scout units in StarCraft. 1-7
- Kenneth W. Regan, Tamal Biswas:
  Psychometric modeling of decision making via game play. 1-8
- Kevin Norris, Ian D. Watson:
  A statistical exploitation module for Texas Hold'em: And its benefits when used with an approximate Nash equilibrium strategy. 1-8
- David Churchill, Michael Buro:
  Portfolio greedy search and simulation for large-scale combat in StarCraft. 1-8
- Joseph Alexander Brown:
  Examination of graphs in Multiple Agent Genetic Networks for Iterated Prisoner's Dilemma. 1-8
- Diego Perez Liebana, Spyridon Samothrakis, Simon M. Lucas:
  Online and offline learning in multi-objective Monte Carlo Tree Search. 1-8
- Anders Drachen, Matthias Schubert:
  Spatial game analytics and visualization. 1-8
- Atif M. Alhejali, Simon M. Lucas:
  Using genetic programming to evolve heuristics for a Monte Carlo Tree Search Ms Pac-Man agent. 1-8
- Mark Grimes, Moshe Dror:
  Observations on strategies for Goofspiel. 1-2
- Edward Jack Powley, Daniel Whitehouse, Peter I. Cowling:
  Bandits all the way down: UCB1 as a simulation policy in Monte Carlo Tree Search. 1-8
- David L. Buckley, Ke Chen, Joshua D. Knowles:
  Predicting skill from gameplay input to a first-person shooter. 1-8
- Antonios Liapis, Héctor Pérez Martínez, Julian Togelius, Georgios N. Yannakakis:
  Adaptive game level creation through rank-based interactive evolution. 1-8
- Cameron Browne:
  UCT for PCG. 1-8
- Josep Valls-Vargas, Santiago Ontañón, Jichen Zhu:
  Towards story-based content generation: From plot-points to maps. 1-8
- Ho-Chul Cho, Kyung-Joong Kim:
  Comparison of human and AI bots in StarCraft with replay data mining. 1-2
- Julian Bishop, Risto Miikkulainen:
  Evolutionary feature evaluation for online Reinforcement Learning. 1-8
- Matthias Kuchem, Mike Preuss, Günter Rudolph:
  Multi-objective assessment of pre-optimized build orders exemplified for StarCraft 2. 1-8
- Brent E. Harrison, David L. Roberts:
  Analytics-driven dynamic game adaption for player retention in Scrabble. 1-8
- Giuseppe Maggiore, Carlos Santos, Dino Dini, Frank Peters, Hans Bouwknegt, Pieter Spronck:
  LGOAP: Adaptive layered planning for real-time videogames. 1-8
- Noor Shaker, Julian Togelius, Georgios N. Yannakakis, Likith Poovanna, Vinay Sudha Ethiraj, Stefan J. Johansson, Robert G. Reynolds, Leonard Kinnaird-Heether, Tom Schumann, Marcus Gallagher:
  The Turing test track of the 2012 Mario AI Championship: Entries and evaluation. 1-8
- Mohammad Shaker, Mhd Hasan Sarhan, Ola Al Naameh, Noor Shaker, Julian Togelius:
  Automatic generation and analysis of physics-based puzzle games. 1-8
- Kokolo Ikeda, Simon Viennot:
  Production of various strategies and position control for Monte-Carlo Go - Entertaining human players. 1-8
- Shoshannah Tekofsky, Pieter Spronck, Aske Plaat, H. Jaap van den Herik, Jan M. Broersen:
  Play style: Showing your age. 1-8
- Hashem Alayed, Fotos Frangoudes, Clifford Neuman:
  Behavioral-based cheating detection in online first person shooters using machine learning techniques. 1-8
- Colin Divilly, Colm O'Riordan, Seamus Hill:
  Exploration and analysis of the evolution of strategies for Mancala variants. 1-7
- Chong-U Lim, D. Fox Harrell:
  Modeling player preferences in avatar customization using social network data: A case-study using virtual items in Team Fortress 2. 1-8
- Hyun-Soo Park, Kyung-Joong Kim:
  Opponent modeling with incremental active learning: A case study of Iterative Prisoner's Dilemma. 1-2
- Alberto Uriarte, Santiago Ontañón:
  PSMAGE: Balanced map generation for StarCraft. 1-8
- Alessandro Canossa, Josep B. Martinez, Julian Togelius:
  Give me a reason to dig Minecraft and psychology of motivation. 1-8
- Timothy Furtak, Michael Buro:
  Recursive Monte Carlo search for imperfect information games. 1-8
- Pu Yang, David L. Roberts:
  Knowledge discovery for characterizing team success or failure in (A)RTS games. 1-8
- Eric Thibodeau-Laufer, Raul Chandias Ferrari, Li Yao, Olivier Delalleau, Yoshua Bengio:
  Stacked calibration of off-policy policy evaluation for video game matchmaking. 1-8
- Jeffrey Tsang:
  The structure of a 3-state finite transducer representation for Prisoner's Dilemma. 1-7
- Samuel A. Roberts, Simon M. Lucas:
  Measuring interestingness of continuous game problems. 1-8
- Stephen Wiens, Jörg Denzinger, Sanjeev Paskaradevan:
  Creating large numbers of game AIs by learning behavior for cooperating units. 1-8
- Philip Hingston, Clare Bates Congdon, Graham Kendall:
  Mobile games with intelligence: A killer application? 1-7
- Rafet Sifa, Christian Bauckhage:
  Archetypical motion: Supervised game behavior learning with Archetypal Analysis. 1-8
- Joseph Alexander Brown:
  Evolved weapons for RPG drop systems. 1-2
- Rahul Dey, Chris Child:
  QL-BT: Enhancing behaviour tree design and implementation with Q-learning. 1-8
- Ho-Chul Cho, Kyung-Joong Kim, Sung-Bae Cho:
  Replay-based strategy prediction and build order adaptation for StarCraft AI bots. 1-7
- Lee-Ann Barlow, Daniel A. Ashlock:
  The impact of connection topology and agent size on cooperation in the iterated prisoner's dilemma. 1-8
- Amit Benbassat, Moshe Sipper:
  EvoMCTS: Enhancing MCTS-based players through genetic programming. 1-8
- Samuel Maycock, Tommy Thompson:
  Enhancing touch-driven navigation using informed search in Ms. Pac-Man. 1-2
- Tom Schaul:
  A video game description language for model-based or interactive learning. 1-8
- Daniel A. Ashlock, Cameron McGuinness:
  Landscape automata for search based procedural content generation. 1-8
- Christopher A. Ballinger, Sushil J. Louis:
  Finding robust strategies to defeat specific opponents using case-injected coevolution. 1-8
- Garrison W. Greenwood:
  A tag-mediated game designed to study cooperation in human populations. 1-7
- Hendrik Baier, Mark H. M. Winands:
  Monte-Carlo Tree Search and minimax hybrids. 1-8
- Cameron Browne:
  Deductive search for logic puzzles. 1-8
- Edward Jack Powley, Daniel Whitehouse, Peter I. Cowling:
  Monte Carlo Tree Search with macro-actions and heuristic route planning for the Multiobjective Physical Travelling Salesman Problem. 1-8
- Siming Liu, Sushil J. Louis, Monica N. Nicolescu:
  Using CIGAR for finding effective group behaviors in RTS game. 1-8