AlphaBetaAdvanced

Uses the Alpha-Beta Pruning algorithm to play a move in a game of Tic-Tac-Toe, but includes depth in the evaluation function.

The vanilla MiniMax algorithm plays perfectly, but it may occasionally make a move that results in a slower victory or a faster loss. For example, after playing moves 0, 1, and then 7, the AI has the opportunity to play a move at index 6, which would win on the diagonal. Yet the AI does not choose this move; it picks another one and still wins inevitably, just by a longer route. Adding depth to the evaluation function lets the AI pick the move that wins as soon as possible.
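The documented class wraps LazoCoder's Java implementation, so the depth-aware idea can be sketched in standalone Java. The types and names below are hypothetical (they are not the `TicTacToeSolver.Board` API): a win found at depth `d` scores `10 - d` and a loss scores `d - 10`, so shallower wins (and deeper losses) get better scores and the AI finishes as quickly as possible.

```java
// Minimal sketch of depth-aware alpha-beta for Tic-Tac-Toe.
// Hypothetical standalone types -- not the TicTacToeSolver.Board API.
public class AlphaBetaSketch {
    static final int[][] LINES = {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
        {0, 4, 8}, {2, 4, 6}               // diagonals
    };

    // Returns 'X' or 'O' if that player has three in a row, else 0.
    static char winner(char[] b) {
        for (int[] l : LINES)
            if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                return b[l[0]];
        return 0;
    }

    static boolean full(char[] b) {
        for (char c : b) if (c == ' ') return false;
        return true;
    }

    // Win at depth d scores 10 - d, loss scores d - 10: the depth term
    // breaks ties between "win now" and "win later" in favor of "now".
    static int alphaBeta(char[] b, char me, boolean maximizing,
                         int depth, int alpha, int beta) {
        char opp = (me == 'X') ? 'O' : 'X';
        char w = winner(b);
        if (w != 0) return (w == me) ? 10 - depth : depth - 10;
        if (full(b)) return 0; // draw
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            b[i] = maximizing ? me : opp;
            int s = alphaBeta(b, me, !maximizing, depth + 1, alpha, beta);
            b[i] = ' ';
            if (maximizing) { best = Math.max(best, s); alpha = Math.max(alpha, best); }
            else            { best = Math.min(best, s); beta  = Math.min(beta,  best); }
            if (beta <= alpha) break; // prune the remaining moves at this node
        }
        return best;
    }

    // Picks the move with the highest depth-aware score.
    static int bestMove(char[] b, char me) {
        int bestIdx = -1, bestScore = Integer.MIN_VALUE;
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            b[i] = me;
            int s = alphaBeta(b, me, false, 1, Integer.MIN_VALUE, Integer.MAX_VALUE);
            b[i] = ' ';
            if (s > bestScore) { bestScore = s; bestIdx = i; }
        }
        return bestIdx;
    }
}
```

With X on 0 and 4 and O on 1 and 2, for instance, the only move scoring `10 - 1 = 9` is the immediate win at index 8; every delayed win scores at most `10 - 3 = 7`, so the depth term forces the fastest finish.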

Modified version of LazoCoder's Tic-Tac-Toe Java Implementation, GPLv3 License.

Functions

fun run(board: TicTacToeSolver.Board, ply: Double = Double.POSITIVE_INFINITY): Int

Play using the Alpha-Beta Pruning algorithm. Includes depth in the evaluation function and supports a depth limit via ply.
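The ply parameter appears to be typed as Double so that the default, Double.POSITIVE_INFINITY, can mean "no depth limit" while any finite value caps the search. A minimal sketch of that cutoff pattern (the names here are illustrative, not the library's internals):

```java
public class PlyLimitSketch {
    // Reports how deep the recursion actually descends before either the
    // ply cutoff or the end of the game stops it; the recursive call
    // stands in for a full move-generation loop. An int depth compares
    // cleanly against a double ply, and any depth < POSITIVE_INFINITY,
    // so an infinite ply leaves the search unbounded.
    static int deepestPly(int depth, double ply, int gameOverAt) {
        if (depth >= ply || depth >= gameOverAt) return depth; // cutoff or terminal
        return deepestPly(depth + 1, ply, gameOverAt);
    }
}
```

For a 9-cell board the game ends by depth 9 regardless, so `deepestPly(0, 3.0, 9)` stops at 3 while `deepestPly(0, Double.POSITIVE_INFINITY, 9)` runs the full 9 plies.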