Monday, January 30, 2012

Player 2.4q: for filtering

For the multiple-ply calculations I looked at a quick-calculating but coarse player, Player 2.3q, which used the old formulation of the primes inputs.

I just switched that to Player 2.4q, which is the same as 2.3q but uses the new Berliner primes inputs formulation. I trained it for 300k games, using alpha=0.1 for the first 100k and then switching to alpha=0.02.
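A minimal sketch of that learning-rate schedule, with the self-play/TD update itself reduced to a placeholder; only the 300k game count and the 0.1 to 0.02 switch at 100k games come from the description above, everything else is an assumption for illustration:

```python
# Sketch of the stepped alpha schedule used for training Player 2.4q.
# play_training_game is a placeholder for one self-play game with TD updates.

N_GAMES = 300_000
SWITCH_GAME = 100_000

def alpha_for_game(game_index: int) -> float:
    """Learning rate: 0.1 for the first 100k games, 0.02 afterwards."""
    return 0.1 if game_index < SWITCH_GAME else 0.02

def play_training_game(alpha: float) -> None:
    """Placeholder for one self-play training game and its weight updates."""
    pass

def train() -> None:
    for i in range(N_GAMES):
        play_training_game(alpha_for_game(i))

if __name__ == "__main__":
    train()
```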

Except for using only five hidden nodes, it is identical in structure to Player 2.4 (which uses 80 hidden nodes).
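A rough sketch of that shared structure, a single hidden layer of sigmoid units where only the hidden-layer size differs between the two players; the input and output counts below are placeholders rather than the real ones, and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SmallNet:
    """Single-hidden-layer feedforward net with sigmoid activations."""
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w_hidden = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.w_output = rng.normal(scale=0.1, size=(n_outputs, n_hidden))

    def evaluate(self, inputs):
        hidden = sigmoid(self.w_hidden @ inputs)
        return sigmoid(self.w_output @ hidden)

# Same structure, different hidden-layer sizes (input/output sizes are dummies):
quick_player = SmallNet(n_inputs=200, n_hidden=5, n_outputs=5)   # ~Player 2.4q
full_player  = SmallNet(n_inputs=200, n_hidden=80, n_outputs=5)  # ~Player 2.4

board_inputs = np.zeros(200)  # placeholder encoding of a board position
print(quick_player.evaluate(board_inputs))
```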

In 100k cubeless money games it scores +0.122ppg +/- 0.004ppg against PubEval and wins 54.0% +/- 0.1% of the games.
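For reference, a small sketch of how numbers like these can be computed from per-game results, treating the +/- figures as standard errors of the mean; the results list below is dummy data, not actual benchmark output:

```python
import math

def mean_and_std_err(values):
    """Return the sample mean and the standard error of the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)

# points[i] = points won (+) or lost (-) by the player in game i (dummy data)
points = [1, -1, 2, 1, -2, 1, -1, 1]

ppg, ppg_err = mean_and_std_err(points)
win_rate, win_err = mean_and_std_err([1.0 if p > 0 else 0.0 for p in points])

print(f"{ppg:+.3f}ppg +/- {ppg_err:.3f}ppg, "
      f"wins {100 * win_rate:.1f}% +/- {100 * win_err:.1f}%")
```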

It performed a little better than Player 2.3q, which might just be noise, or might suggest that the Berliner primes formulation makes more of a difference when the number of hidden nodes is small. That is, with a larger number of hidden nodes the network can approximate the value of primes itself and does not need it spelled out through a custom input. But the small incremental performance gain makes it hard to have much confidence in that conclusion.

