Computer chess
The second camp took a "[[brute force search]]" approach, examining as many positions as possible using the [[minimax algorithm]] with only the most basic evaluation function. A program might, for example, pay attention only to checkmate, which side has more pieces, and which side has more possible moves, without any attempt at more complicated positional judgement. In compensation, the program would be fast enough to look exhaustively at all positions to a certain depth within its allotted time.
 
Use of [[alpha-beta pruning]] combined with a number of [[search heuristic]]s dramatically improved the performance of brute-force search algorithms. The general consensus today is that chess is a nearly solved problem as an [[artificial intelligence|AI]] design goal, and that the [[China|Chinese]] game of [[game of go|go]] is now at the forefront of challenges to AI designers.
 
Ultimately, the brute force camp won out, in the sense that its programs simply played better chess. The game of chess is not conducive to reliably distinguishing between ''obviously'' bad, trivial, and good moves using a rigid set of rules; traps are set and sprung by expert players who have mastered the many levels of depth and subtlety inherent to the game. Furthermore, order-of-magnitude advances in processing power have made the brute force approach far more effective than it was in the early years. The result is a very solid tactical player that is error-free to the limits of its search depth and time, and the strategic AI approach is now widely regarded as obsolete. It turned out to produce better results, at least in the field of chess, to let computers do what they do best (i.e., calculate) rather than to coax them into imitating human thought processes and knowledge.