Last modified: 18 November 1997

Computer Chess Publications

This section contains references to publications related to computer chess, many of which are available as HTML or (compressed) PostScript files on the web.

General

GEN92a
Jonathan Schaeffer, Joseph Culberson, Norman Treloar, Brent Knight, Paul Lu, Duane Szafron
A World Championship Caliber Checkers Program
Artificial Intelligence, Volume 53, Number 2-3, pages 273-290, 1992.

Abstract

The checkers program Chinook has won the right to play a 40-game match for the World Checkers Championship against Dr. Marion Tinsley. This was earned by placing second, after Dr. Tinsley, at the 1990 U.S. National Open, the biennial event used to determine a challenger for the Championship. This is the first time a program has earned the right to contest for a human World Championship. In an exhibition match played in December 1990, Tinsley narrowly defeated Chinook 7.5 - 6.5. This paper describes the program, the research problems encountered and our solutions. Many of the techniques used for computer chess are directly applicable to computer checkers. However, the problems of building a world championship caliber program force us to address some issues that have, to date, been largely ignored by the computer chess community.

GEN92b
Herbert A. Simon and Jonathan Schaeffer.
The Game of Chess
Handbook of Game Theory with Economic Applications, vol. 1, Robert J. Aumann and Sergiu Hart (editors), Elsevier Science Publishers, Netherlands, pp. 1-17, 1992. Also available as CMU technical report AIP-105. (Compressed postscript, 30 kbytes)
GEN93
Jonathan Schaeffer, Paul Lu, Duane Szafron and Robert Lake
A Re-examination of Brute-Force Search
Games: Planning and Learning, Chapel Hill, N.C., pp. 51-58, 1993. Available as AAAI Report FS9302. (Compressed postscript, 57 kbytes)

Abstract

In August 1992, the World Checkers Champion, Dr. Marion Tinsley, defended his title against the computer program Chinook. The best-of-40-game match was won by Tinsley with 4 wins to the program's 2. This was the first time in history that a program played for a human World Championship. Chinook, with its deep search and endgame databases, has established itself as a Grandmaster checker player. However, the match demonstrated that current brute-force game-playing techniques alone will be insufficient to defeat human champions in games as complex as checkers. This paper re-examines brute-force search and uses anecdotal evidence to argue that there comes a point where additional search is not cost effective. This limit, which we believe we are close to in checkers, becomes an obstacle to further progress. The problems of deep brute-force search described in this paper must be addressed before computers will be dominant in games such as checkers and chess.

GEN94a
Robert Lake, Jonathan Schaeffer and Paul Lu
Solving Large Retrograde Analysis Problems Using a Network of Workstations
Advances in Computer Chess VII, H.J. van den Herik, I.S. Herschberg and J.W.H.M. Uiterwijk (editors), University of Limburg, Maastricht, Netherlands, pp. 135-162, 1994
Compressed postscript (74 kbytes)
GEN94b
Marc-Francois Baudot, Jean-Christophe Weill
Comment les ordinateurs jouent aux échecs [How computers play chess]
20 June 1994
GEN95a
Jean-Christophe Weill
Programmes d'échecs de championnat : architecture logicielle, synthèse de fonctions d'évaluation, parallélisme de recherche [Championship chess programs: software architecture, synthesis of evaluation functions, search parallelism]
Doctoral thesis, Université Paris, 27 January 1995

Abstract

We present our work in the field of chess programming. This work has led to:

Complementing these studies, we describe the current state of the art in programming competitive mind games: search algorithms, the heuristics applicable to them, the parallelization of minimax algorithms, evaluation schemes, and learning.

Programming games of skill was long considered the Drosophila melanogaster of machine intelligence: the field was expected to produce techniques and algorithms reusable in other areas of artificial intelligence. According to C. Shannon, it is also a high-profile subject whose progress is easily communicated to the public. We address the question in the setting of game programs that must answer a problem, that is, propose a move, within a limited time, that is, under tournament conditions.

We summarize the state of the art in game programming while pointing out our contributions to the field. We compare the various minimax searches based on alpha-beta pruning with the NegaC* algorithm we defined, after giving the main results we established on its complexity. We give a definition, in the negamax paradigm, of the new "proof-number search" algorithm, which we compare with our chess program Ecume in the context of mate searches.

We present a set of heuristics that make negamax searches faster and more reliable in selecting the best move, making explicit the choices taken in our chess programs.

After reviewing the results known to date on parallelizing the minimax algorithm, we present the results of our implementations on a distributed machine, the Connection Machine 5. These results led us to define a new parallelization method, which we compared with the best previously known methods on simulated trees, on a competitive Othello program, and on a chess program. We highlight the qualities of each method.

We continue with the method we chose for constructing evaluation functions, explaining how we introduced the notion of strategic plans into our chess programs. We also show how a program can automatically build an evaluation function by learning, and report the conclusions of our experiments in constructing a correct evaluation function for the king-and-queen versus king-and-queen endgame.

Finally, we describe the characteristics of our chess programs and, through an analysis of their championship games, bring out their strengths and weaknesses.
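The NegaC* algorithm mentioned in this abstract converges on the minimax value through a sequence of null-window alpha-beta probes that bisect the interval between a lower and an upper bound. The following is a minimal Python sketch of that general idea only; the function names, the toy nested-list tree format, and the bisection convention are assumptions for illustration, not details taken from the thesis.

```python
def alphabeta(tree, alpha, beta, depth):
    """Minimal fail-soft negamax alpha-beta over a toy nested-list tree:
    an int is a leaf score, a list is an interior node."""
    if isinstance(tree, int):
        return tree                   # leaf (depth cutoff omitted in this toy)
    best = -10**9
    for child in tree:
        best = max(best, -alphabeta(child, -beta, -alpha, depth - 1))
        alpha = max(alpha, best)
        if alpha >= beta:
            break                     # cutoff
    return best

def negacstar(tree, lo, hi, depth):
    """NegaC*-style driver: bisect [lo, hi] with null-window probes
    until the bounds meet on the minimax value (integer scores)."""
    while lo < hi:
        mid = lo + (hi - lo + 1) // 2
        score = alphabeta(tree, mid - 1, mid, depth)
        if score >= mid:
            lo = score                # probe failed high: raise lower bound
        else:
            hi = score                # probe failed low: drop upper bound
    return lo
```

With the toy tree [[3, 5], [2, 9]] (negamax value 3), negacstar(tree, -100, 100, 4) converges to 3 in a handful of probes.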

GEN97
Ernst A. Heinz.
How DarkThought Plays Chess
Preprint of an article in: ICCA Journal, Vol. 20, No. 3, Sept. 1997 (HTML)

Conspiracy Numbers

CN90
Norbert Klingbeil and Jonathan Schaeffer.
Empirical Results with Conspiracy Numbers
Computational Intelligence, vol. 6, pp. 1-11, 1990. (Compressed postscript, 57 kbytes)

Abstract

McAllester's Conspiracy Numbers algorithm is an exciting new approach to minimax search that builds trees to variable depth without application-dependent knowledge. The likelihood of the root taking on a value is expressed by its conspiracy number: the minimum number of leaf nodes that must change their value to cause the root to change to that value. This paper describes experiences with the algorithm, using both random and application-generated trees. Experiments suggest that more emphasis on breadth, rather than depth, can lead to significant performance improvements. New enhancements to the algorithm are capable of solving 41% more problems than McAllester's original proposal.
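The definition in the abstract can be made concrete in a few lines of code. This is a sketch of the conspiracy-number calculation only, not of the full search; the tree encoding and function names are mine, not the paper's.

```python
def minimax(t):
    """Toy tree: an int is a leaf; otherwise ('max'|'min', [children])."""
    if isinstance(t, int):
        return t
    kind, kids = t
    vals = [minimax(k) for k in kids]
    return max(vals) if kind == 'max' else min(vals)

def cn_up(t, v):
    """Minimum number of leaves that must change value (the 'conspirators')
    to make t's minimax value reach at least v."""
    if minimax(t) >= v:
        return 0
    if isinstance(t, int):
        return 1                                # changing this one leaf suffices
    kind, kids = t
    if kind == 'max':
        return min(cn_up(k, v) for k in kids)   # one child reaching v is enough
    return sum(cn_up(k, v) for k in kids)       # every child must reach v

def cn_down(t, v):
    """Dual: minimum number of leaf changes to make t's value at most v."""
    if minimax(t) <= v:
        return 0
    if isinstance(t, int):
        return 1
    kind, kids = t
    if kind == 'min':
        return min(cn_down(k, v) for k in kids)
    return sum(cn_down(k, v) for k in kids)
```

For t = ('max', [('min', [3, 8]), ('min', [5, 7])]), the minimax value is 5; a single leaf change suffices either to raise the root to 6 or to lower it to 4, so both conspiracy numbers are 1.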


CN93
David McAllester and Denize Yuret
Alpha-Beta-Conspiracy Search
Paper in progress

Abstract

We introduce a variant of alpha-beta search in which each node is associated with two depths rather than one. The purpose of alpha-beta search is to find strategies for each player that together establish a value for the root position. The max strategy establishes a lower bound and the min strategy establishes an upper bound. It has long been observed that forced moves should be searched more deeply. Here we make the observation that in the max strategy we are only concerned with the forcedness of max moves, and in the min strategy we are only concerned with the forcedness of min moves. This leads to two measures of depth --- one for each strategy --- and to a two-depth variant of alpha-beta called ABC search. The two-depth approach can be formally derived from conspiracy theory, and the structure of the ABC procedure is justified by two theorems relating ABC search and conspiracy numbers.

CN94
Lisa Lister and Jonathan Schaeffer
An Analysis of the Conspiracy Numbers Algorithm
Computers and Mathematics with Applications, vol. 27, no. 1, pp. 41-64, 1994 (Compressed postscript, 102 kbytes)

Abstract

McAllester's Conspiracy Numbers algorithm is a minimax search procedure that builds game trees to variable depths without application-dependent knowledge. The algorithm gathers information to determine how likely it is that the search of a sub-tree will produce a useful result. "Likeliness" is measured by the conspiracy numbers: the minimum number of leaf nodes that must change their value (by being searched deeper) to cause the minimax value of a sub-tree to change. The search is controlled by the conspiracy threshold (CT), the conspiracy number beyond which it is considered unlikely that a sub-tree's value can be changed.

This paper analyzes the best-case performance of the algorithm. McAllester's original algorithm is shown to build a tree of size O(w^CT/(w - 1)), where w is the branching factor of the tree. A new improvement to the algorithm is shown to reduce this complexity to O(CT^2). Hence for a given fixed w, the algorithm's best case can be improved from exponential to quadratic growth. Although the minimal-tree case is not necessarily representative of the search trees built in practice, experimental results are presented that demonstrate that the improvement appears to be significant in the expected case as well.


Heuristics

HEUR89
Jonathan Schaeffer.
The History Heuristic and the Performance of Alpha-Beta Enhancements,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 11, pp. 1203-1212, 1989. (Compressed postscript, 53 kbytes)

Abstract

Many enhancements to the alpha-beta algorithm have been proposed to help reduce the size of minimax trees. A recent enhancement, the history heuristic, which improves the order in which branches are considered at interior nodes, is described. A comprehensive set of experiments is reported that tries all combinations of enhancements to determine which yields the best performance. Previous work on assessing their performance has concentrated on the benefits of individual enhancements or a few combinations.

However, each enhancement should not be taken in isolation; one would like to find the combination that provides the greatest reduction in tree size. Results indicate that the history heuristic and transposition tables significantly out-perform other alpha-beta enhancements in application generated game trees. For trees up to depth 8, when taken together, they account for over 99% of the possible reductions in tree size, with the other enhancements yielding insignificant gains.
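The heuristic itself is small enough to sketch. The table below and its 2^depth credit scheme follow the general description in this abstract; treat the details (key format, exact credit function) as assumptions rather than the paper's specification.

```python
# History table: move -> accumulated credit. Each time a move proves best
# at an interior node (or causes a cutoff), it earns credit that grows
# with the depth of the subtree below it.
history = {}

def record_success(move, depth):
    """Credit a move that was best or caused a cutoff at search depth."""
    history[move] = history.get(move, 0) + (1 << depth)

def ordered(moves):
    """Order a move list so historically successful moves are searched
    first, improving alpha-beta's chance of early cutoffs."""
    return sorted(moves, key=lambda m: history.get(m, 0), reverse=True)
```

After record_success('e2e4', 5) and record_success('d2d4', 3), ordered(['d2d4', 'a2a3', 'e2e4']) puts 'e2e4' first, since credit from deeper subtrees dominates.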

HEUR92
T. Anthony Marsland, Alexander Reinefeld
Heuristic Search in One and Two Player Games
October 4, 1992

Learning

LRN97
Johannes Fürnkranz
Machine Learning in Computer Chess: The Next Generation
Austrian Research Institute for Artificial Intelligence, Vienna, TR-96-11, 1996. (74KB Compressed PostScript)

Abstract

Ten years ago the ICCA Journal published an overview of machine learning approaches to computer chess (Skiena, 1986). The author's conclusions were rather pessimistic: with the exception of rote learning in the opening book, few results had trickled into competitive programs, and no research projects on the horizon offered reason for optimism. In this paper we update Skiena's work with the research that has been conducted in this area since the publication of his paper. By doing so we hope to show that at least Skiena's second conclusion is no longer valid.


Parallel chess

PAR94a
Chris Joerg and Bradley C. Kuszmaul
Massively Parallel Chess,
presented at the Third DIMACS Parallel Implementation Challenge held at Rutgers University, October 16-17, 1994. (102KB of compressed postscript, 218KB uncompressed)

Abstract

Computer chess provides a good testbed for understanding dynamic MIMD-style computations. To investigate the programming issues, we engineered a parallel chess program called *Socrates, which, running on the NCSA's 512-processor CM-5, tied for third in the 1994 ACM International Computer Chess Championship. *Socrates uses the Jamboree algorithm to search game trees in parallel and uses the Cilk 1.0 language and run-time system to express and to schedule the computation. In order to obtain good performance for chess, we use several mechanisms not directly provided by Cilk, such as aborting computations and directly accessing the active message layer to implement a global transposition table distributed across the processors. We found that we can use the critical path length C and the total work W to predict the performance of our chess programs. Empirically, *Socrates runs in time T ~ 0.95C + 1.09W/P on P processors. For best-ordered uniform trees of height h and degree d the average available parallelism in Jamboree search is Θ((d/2)^(h/2)). *Socrates searching real chess trees under tournament time controls yields average available parallelism of over 1000.
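The empirical model quoted in the abstract can serve as a back-of-the-envelope predictor. The coefficients below are the ones reported for *Socrates; the sample figures in the usage note are invented for illustration.

```python
def predicted_time(work, critical_path, p, a=1.09, b=0.95):
    """Predict parallel runtime from total work W, critical path length C,
    and processor count P, using the empirical model T ~ a*W/P + b*C."""
    return a * work / p + b * critical_path
```

For example, with W = 1000 seconds of work and C = 1 second of critical path, 100 processors give roughly 11.85 seconds; adding processors beyond that point helps little, because the b*C term is a floor on the runtime.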

PAR94b
Bradley C. Kuszmaul
The StarTech Massively Parallel Chess Program,
January, 1995. (313K of postscript)

Abstract

The StarTech massively parallel chess program, running on a 512-processor Connection Machine CM-5 supercomputer, tied for third place at the 1993 ACM International Computer Chess Championship. StarTech employs the Jamboree search algorithm, a natural extension of J. Pearl's Scout search algorithm, to find parallelism in game-tree searches. StarTech's work-stealing scheduler distributes the work specified by the search algorithm across the processors of the CM-5. StarTech uses a global transposition table shared among the processors. StarTech has an informally estimated rating of over 2400 USCF.

Two performance measures help in understanding the performance of the StarTech program: the work, W, and the critical path length, C. The Jamboree search algorithm used in StarTech seems to perform about 2 to 3 times more work than does our best serial implementation. The critical path length, under tournament conditions, is less than 0.1% of the total work, yielding an average parallelism of over 1000. The StarTech scheduler achieves actual performance of approximately T ~ 1.02W/P + 1.5C on P processors. The critical path and work can be used to tune performance by allowing development of the program on a small, readily accessible machine while predicting the performance on a big, tournament-sized machine.

PAR94c
Jean-Christophe Weill, Marc-Francois Baudot, Jean-Luc Seret
Parallélisation d'algorithmes minimax sur CM-5 [Parallelization of minimax algorithms on the CM-5]
Note EDF/DER/TIEM/IMA Hi-23/94-004
PAR95
Lewis Benjamin Stiller
Exploiting Symmetry on Parallel Architectures
Dissertation, Johns Hopkins University, 1995

Abstract

This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry.

Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network.

Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs.

First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and discovered a number of results.

Second, parallel algorithms for Fourier transforms over finite groups are developed, and preliminary parallel implementations of group transforms for dihedral and symmetric groups are described. Applications in learning, vision, and pattern recognition are proposed.

Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communications primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.


Tree Search

TS90
Victor Allis
Searching for Solutions in Games and Artificial Intelligence
Thesis

Abstract

Thinking games are investigated from the perspective of research in Artificial Intelligence. In the introduction the relevance of such investigations is discussed. Then, two new search techniques, proof-number search and dependency-based search, are described, which were developed while solving the games connect-four (see the M.Sc. thesis by the same author), qubic and go-moku. Finally, the complexity of and the state of the art in computer game playing are described for each of the games in the so-called Olympic List of 15 well-known games, leading to a prediction for the future of games.
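Proof-number search, the first of the two techniques named above, assigns every node of an AND/OR tree a proof number (the minimum number of frontier nodes that must be proved for the node to become proved) and a disproof number (the dual), then repeatedly expands a most-proving frontier node. The propagation rules can be sketched as follows; the tree encoding is my assumption, not Allis's notation.

```python
INF = float('inf')

def pn_dn(t):
    """Proof/disproof numbers for an AND/OR tree given as:
    True / False for solved leaves, '?' for an unexpanded frontier node,
    ('or' | 'and', [children]) for interior nodes."""
    if t is True:
        return 0, INF        # proved: nothing left to prove, cannot disprove
    if t is False:
        return INF, 0        # disproved
    if t == '?':
        return 1, 1          # frontier: one expansion may prove or disprove it
    kind, kids = t
    pairs = [pn_dn(k) for k in kids]
    if kind == 'or':         # proving one child proves an OR node
        return min(p for p, _ in pairs), sum(d for _, d in pairs)
    return sum(p for p, _ in pairs), min(d for _, d in pairs)
```

The search loop (not shown) walks from the root toward a frontier node, following a child with equal proof number at OR nodes and equal disproof number at AND nodes, expands it, recomputes the numbers, and stops when the root's proof or disproof number reaches 0.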

TS93a
Wim Pijls, Arie de Bruin
A framework for game tree algorithms
Erasmus University Report 93-4

Abstract

A unifying framework for game tree algorithms is GSEARCH, designed by Ibaraki [Ibaraki 86]. In general, instances of this framework require a relatively large amount of memory. In [Ibaraki 91A] an extended framework, called RSEARCH, is discussed, in which the use of memory can be controlled. In this paper variants of the above frameworks are introduced, called Gsearch and Rsearch respectively. It is shown that, in these frameworks, the classical alpha-beta algorithm is a depth-first search instance and H* is a best-first search instance. Furthermore two new algorithms, Maxsearch and Minsearch, are presented, both as best-first search instances. Maxsearch is close to SSS* [Stockman] and SSS-2 [Pijls-2], whereas Minsearch is close to Dual SSS*.

TS93b
Eric B. Baum, Warren D. Smith
Best Play for Imperfect Players and Game Tree Search
May 19, 1993
TS94a
Arie de Bruin, Wim Pijls, Aske Plaat
Solution Trees as a Basis for Game Tree Search
May 19, 1994.

Abstract

A game tree algorithm is an algorithm computing the minimax value of the root of a game tree. Many algorithms use the notion of establishing proofs that this value lies above or below some boundary value. We show that this amounts to the construction of a solution tree. We discuss the role of solution trees and critical trees in the following algorithms: Principal Variation Search, alpha-beta, and SSS-2. A general procedure for the construction of a solution tree, based on alpha-beta and Null-Window-Search, is given. Furthermore two new examples of solution-tree-based algorithms are presented that surpass alpha-beta, i.e., they never visit more nodes than alpha-beta, and often visit fewer.

TS94b
Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie De Bruin.
A New Paradigm for Minimax Search
Tech. Rep. 94-18, Department of Computing Science, University of Alberta, 1994. (Compressed postscript, 97 kbytes)

Abstract

This paper introduces a new paradigm for minimax game-tree search algorithms. MT is a memory-enhanced version of Pearl's Test procedure. By changing the way MT is called, a number of best-first game-tree search algorithms can be simply and elegantly constructed (including SSS*).

Most of the assessments of minimax search algorithms have been based on simulations. However, these simulations generally do not address two of the key ingredients of high performance game-playing programs: iterative deepening and memory usage. This paper presents experimental data from three game-playing programs (checkers, Othello and chess), covering the range from low to high branching factor. The improved move ordering due to iterative deepening and memory usage results in significantly different results from those portrayed in the literature. Whereas some simulations show Alpha-Beta expanding almost 100% more leaf nodes than other algorithms [9], our results showed variations of less than 20%.

One new instance of our framework, MTD(f), out-performs our best alpha-beta searcher (aspiration NegaScout) on leaf nodes, total nodes and execution time. To our knowledge, these are the first reported results that compare both depth-first and best-first algorithms given the same amount of memory.
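The driver suggested by this abstract can be sketched in a few lines: MTD(f) calls a null-window search repeatedly, using each fail-soft result to tighten a lower or upper bound until the two meet. The toy alpha-beta below stands in for the memory-enhanced MT procedure; a real implementation would add a transposition table so successive probes re-use each other's work. All names here are assumptions for illustration.

```python
def _ab(tree, alpha, beta, maximizing):
    """Toy fail-soft alpha-beta on a nested-list tree (ints are leaves)."""
    if isinstance(tree, int):
        return tree
    best = -float('inf') if maximizing else float('inf')
    for child in tree:
        v = _ab(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, v)
            alpha = max(alpha, best)
        else:
            best = min(best, v)
            beta = min(beta, best)
        if alpha >= beta:
            break                     # cutoff
    return best

def mt(tree, gamma, depth=0):
    """Null-window probe around gamma; returns a fail-soft bound."""
    return _ab(tree, gamma - 1, gamma, True)

def mtdf(tree, f, depth, probe=mt):
    """MTD(f): converge on the minimax value starting from guess f."""
    g, lower, upper = f, -float('inf'), float('inf')
    while lower < upper:
        gamma = g + 1 if g == lower else g
        g = probe(tree, gamma, depth)
        if g < gamma:
            upper = g                 # probe failed low
        else:
            lower = g                 # probe failed high
    return g
```

For the toy tree [[3, 5], [2, 9]] (root maximizing, minimax value 3), mtdf(tree, 0, 2) returns 3; a good first guess f shortens the sequence of probes.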

TS95a
Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie De Bruin.
An Algorithm Faster than NegaScout and SSS* in Practice
Paper presented at Advances in Computer Chess 7, Hong Kong, 1995.

Abstract

In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program.

In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(f). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.

TS95b
Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie De Bruin.
Best-First Fixed-Depth Game-Tree Search in Practice
Paper to be published in IJCAI'95, Montreal.

Abstract

We present a new paradigm for minimax search algorithms: MT, a memory-enhanced version of Pearl's Test procedure. By changing the way MT is called, a number of practical best-first search algorithms can be simply constructed. Reformulating SSS* as an instance of MT eliminates all its perceived implementation drawbacks. Most assessments of minimax search performance are based on simulations that do not address two key ingredients of high performance game-playing programs: iterative deepening and memory usage. Instead, we use experimental data gathered from tournament checkers, Othello and chess programs. The use of iterative deepening and memory makes our results differ significantly from the literature. One new instance of our framework, MTD(f), out-performs our best Alpha-Beta searcher on leaf nodes, total nodes and execution time. To our knowledge, these are the first reported results that compare both depth-first and best-first algorithms given the same amount of memory.

TS95c
Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie De Bruin.
A Minimax Algorithm Better than Alpha-Beta? No and Yes
Tech. Rep. 95-15, Department of Computing Science, University of Alberta, June 1995.

Abstract

This paper has three main contributions to our understanding of fixed-depth minimax search:

(A) A new formulation for Stockman's SSS* algorithm, based on Alpha-Beta, is presented. It solves all the perceived drawbacks of SSS*, finally transforming it into a practical algorithm. In effect, we show that SSS* = Alpha-Beta + transposition tables. The crucial step is the realization that transposition tables contain so-called solution trees, structures that are used in best-first search algorithms like SSS*. Having created a practical version, we present performance measurements with tournament game-playing programs for three different minimax games, yielding results that contradict a number of publications.

(B) Based on the insights gained in our attempts at understanding SSS*, we present a framework that facilitates the construction of several best-first fixed-depth game-tree search algorithms, known and new. The framework is based on depth-first null-window Alpha-Beta search, enhanced with storage to allow for the refining of previous search results. It focuses attention on the essential differences between algorithms.

(C) We present a new instance of the framework, MTD(f). It is well-suited for use with iterative deepening, and performs better than algorithms that are currently used in most state-of-the-art game-playing programs. We provide experimental evidence to explain why MTD(f) performs better than the other fixed-depth minimax algorithms.


Endgames

ENDG94
Jean-Christophe Weill
How Hard is the Correct Coding of an Easy Endgame

Comments to: Paul Verhelst ([email protected]) / finger me