The Chess Player That Mocked Her Opponent

Credit: obsidianphotography/pixabay

The Sinquefield Cup in St Louis, Missouri, held between August 16 and 29, drew some of the world’s best chess players – including Magnus Carlsen, Fabiano Caruana and Viswanathan Anand – into games that kept the audience riveted. Yet in terms of sheer skill, the most hotly contested game of chess in those two weeks didn’t happen in the port city.

Computers have been better than humans at chess since IBM’s Deep Blue beat Garry Kasparov in 1997. Since then, machines have improved much further and have defeated top human players at Go and even Dota 2. Chess-engine tournaments have also been organised since 2011, showcasing not only how well computers can play but, more importantly, the computational prowess they can muster. These tournaments don’t enjoy the spectatorship that human chess does: the machines bring with them an attitude of “just solving a problem” and none of the intrigue or passion, rendering their clashes a protracted academic exercise.

Even so, the computer v. computer game played at the Chess.com headquarters in California on August 23 was different. Leela Chess Zero, an open-source chess engine, appeared to be mocking its opponent, the Chiron chess engine, after securing an unassailable lead. Leela uses deep reinforcement learning to play and to improve at the game – a far more sophisticated approach than the brute-force search that powers Chiron.

Leela had taught herself to play chess from scratch, improving by competing in millions of games against a copy of herself – a method called competitive self-play. ‘Conventional’ chess engines like Chiron, on the other hand, rely on a diet of completed games between grandmasters and on endgame tables.
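
To make ‘competitive self-play’ concrete, here is a deliberately minimal sketch in Python. Everything in it is an illustrative invention – a toy game (Nim) and a simple lookup-table policy – and none of it is Leela’s actual code, which couples a deep neural network to a Monte Carlo tree search:

    import random

    def legal_moves(pile):
        # In this toy game of Nim, a player removes 1, 2 or 3 objects;
        # whoever takes the last object wins.
        return [n for n in (1, 2, 3) if n <= pile]

    class TabularPolicy:
        """A toy stand-in for Leela's network: one value per (pile, move)."""
        def __init__(self, epsilon=0.1, lr=0.05):
            self.values = {}        # (pile, move) -> estimated value
            self.epsilon = epsilon  # exploration rate
            self.lr = lr            # learning rate

        def choose(self, pile):
            moves = legal_moves(pile)
            if random.random() < self.epsilon:
                return random.choice(moves)  # occasionally explore
            return max(moves, key=lambda m: self.values.get((pile, m), 0.0))

        def update(self, history, winner):
            # Reinforce every move the winning side made; penalise the loser's.
            for player, pile, move in history:
                reward = 1.0 if player == winner else -1.0
                old = self.values.get((pile, move), 0.0)
                self.values[(pile, move)] = old + self.lr * (reward - old)

    def self_play_game(policy, start=15):
        # Two copies of the same policy alternate moves against each other.
        pile, player, history = start, 0, []
        while pile > 0:
            move = policy.choose(pile)
            history.append((player, pile, move))
            pile -= move
            player ^= 1
        policy.update(history, winner=player ^ 1)

    policy = TabularPolicy()
    for _ in range(50000):
        self_play_game(policy)

    policy.epsilon = 0.0     # stop exploring for the demonstration
    print(policy.choose(6))  # trained play should take 2, leaving a multiple of 4

The training signal here is the same one Leela uses at vastly greater scale: nothing but the outcomes of games played against herself.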

In California, Leela had played over a hundred perfect moves to bring her opponent to its virtual heels. Then, suddenly, she moved away from conventional tactics and began making patently sub-optimal moves. From a position where she could have checkmated her opponent in 20 moves, Leela went on to sacrifice her queen twice, underpromote a pawn, and give away a rook and a knight before finally finishing Chiron off with the shortest possible mate.

Had the audience just witnessed an AI engaging in ridicule? This kind of behaviour is unanticipated: engines like Leela – and their cousins, Google’s AlphaGo and OpenAI Five – are built to win at specific games, not to exhibit ego. Observers also couldn’t miss the similarities between her and Google’s AlphaZero, on which she is modelled.

In December 2017, AlphaZero played Stockfish, the strongest conventional engine at the time (although Google didn’t use its best version) and a program built in part by the developer who went on to lead Leela’s engineering. AlphaZero had taught itself to play chess in four hours of competitive self-play, and it proceeded to vanquish Stockfish while displaying flashes of an intuition that prompted commentators to say it had played more like a human.

Leela’s style is similar: she plays with human motifs that make her intelligible to human players. However, it remains unclear how Leela developed these human-like tendencies and how much further she could take them. In fact, it is also unclear whether observers are simply anthropomorphising her in an effort to explain her behaviour.

Speculating on this front without more data is risky because of how humans tend to perceive AI: sometimes as friendly (as when a computer is addressed with a female pronoun) but often as malevolent and untrustworthy. HAL 9000, the antagonist of the film 2001: A Space Odyssey, is a prime example of the latter conception, rooted in historical misgivings that the pursuit of AI flattens what it means to be human. Experts have cautioned that such depictions could foster both hysteria about and disbelief in AI’s possibilities.

The team of programmers behind Leela has advanced one possible cause: endgame tablebases had been added to Leela’s playing algorithm for the first time. Endgame tables record whether a given arrangement of pieces on the board can be converted into a victory. Engines consult them to assess their prospects and then follow the table to the shortest win. Leela’s programmers clarified, however, that she was given access only to tables that say whether a position is ‘won’, ‘lost’ or ‘drawn’ – not how to reach that result.
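
The difference is easy to make concrete. The sketch below uses the open-source python-chess library to probe Syzygy endgame tablebases; the directory path is an assumed placeholder, and the tablebase files themselves must be downloaded separately. A win/draw/loss probe of the kind Leela was given returns only a verdict, never a route to the win:

    import chess
    import chess.syzygy

    # Open a directory of Syzygy tablebase files (placeholder path).
    with chess.syzygy.open_tablebase("./syzygy") as tablebase:
        # King and rook v. king: a textbook won endgame for White.
        board = chess.Board("8/8/8/8/8/4k3/8/R3K3 w - - 0 1")

        # probe_wdl() returns only the verdict for the side to move:
        # 2 = win, 0 = draw, -2 = loss. It says nothing about how to win.
        print(tablebase.probe_wdl(board))  # 2

        # Conventional engines also consult distance-based tables,
        # which do encode how to make progress towards the mate.
        print(tablebase.probe_dtz(board))

Leela, in other words, could see that her position was winning without being told how to win it – she had to improvise the path to mate on her own.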

They have thus concluded that Leela was likely transitioning into an endgame position she had not yet learnt to play perfectly. It remains to be seen whether, once she figures the endgame out, her playfulness – or what we take to be playfulness – will still be there.

Binit Priyaranjan is a student of literature at Delhi University and a freelance writer.
