The continued development of evaluation functions for chess and shogi engines led to the introduction of Efficiently Updatable Neural Networks (NNUE) by Yu Nasu in 2018. These networks exploit the full potential of modern general-purpose processors, foregoing the need for specialised hardware and thus reducing cost and energy consumption. Three optimisations are central to their design: leveraging the sparsity and redundancy of the input encoding, lowering the bit width by quantising all calculations to integers, and using vectorisation with single-instruction-multiple-data (SIMD) registers. These optimisations are evaluated for their contribution to Efficiently Updatable Neural Networks and for their potential impact on efficiency and speed in different environments. Finally, the optimisations are implemented in Python and C++ to test their real-world benefits.