DeepMind's AlphaZero AI beats the world's top board game-playing AI models

Discussion in 'Industry News' started by Calliers, Dec 7, 2018.

  1. Calliers

    Calliers HH's MC Staff Member

    Joined:
    Oct 12, 2004
    Messages:
    39,098
    Likes Received:
    2,509
    Trophy Points:
    139
    AI seems to be getting smarter and more efficient by the day. It can already beat even the world's best human players in games like Go, Chess, and DOTA 2, but it seems taking on mere humans isn't enough for one AI.

    DeepMind's AlphaZero has grown tired of flesh opponents and decided to shift its focus to the (metaphorical) destruction of other AIs. It's been pitted against some of the world's top machine learning models across several different board games, and it has managed to come out on top consistently.

    Specifically, AlphaZero has conquered DeepMind's other top board game-playing AI, AlphaGo Zero, in addition to world-champion chess AI Stockfish, and "elmo," an AI Shogi champion. Amazingly, AlphaZero was trained in a mostly hands-off manner with "no human intervention," according to CNET.
    ____________________
    Source: techspot
     
  2. IvanV

    IvanV HH Assassin Guild Member

    Joined:
    Dec 18, 2004
    Messages:
    10,032
    Likes Received:
    1,376
    Trophy Points:
    138
    Alpha Zero is an interesting... entity. Its triumph over Stockfish (apart from AZ, the world's strongest chess engine) rocked the world of chess and, while there is some controversy regarding the match, the games were really, really interesting.

    The peculiar thing about Alpha Zero is that it does not rate a position the way normal chess engines do. They count the material, then look for things such as king safety, and score the position based on that in "pawns" (so, for example, a score of 3 means that White has an advantage equivalent to three extra pawns). To get a more accurate evaluation, they systematically search the tree of possible positions (starting from the current one) as deeply as they can, evaluate the resulting positions, and pick the best branch, and with it the best move to make in the current position.
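
    Just to make that concrete, here is a minimal sketch (in Python, using the python-chess library for move generation) of the "classic" recipe: a material-only evaluation measured in pawns, plus an alpha-beta search over the tree of possible positions. The piece values, the fixed depth, and the complete lack of king-safety or positional terms are my own simplifications for illustration; a real engine's evaluation and search are far more sophisticated.

    Code:
    # Toy "classic engine" sketch: material evaluation in pawns + alpha-beta search.
    # Assumes the python-chess library (pip install chess) for move generation;
    # the piece values and search depth are illustrative, not what Stockfish uses.
    import chess

    PIECE_VALUES = {                       # material values in pawns
        chess.PAWN: 1.0, chess.KNIGHT: 3.0, chess.BISHOP: 3.0,
        chess.ROOK: 5.0, chess.QUEEN: 9.0, chess.KING: 0.0,
    }

    def evaluate(board):
        """Static score in pawns from the side-to-move's point of view."""
        if board.is_checkmate():
            return -1000.0                 # the side to move has been mated
        if board.is_stalemate() or board.is_insufficient_material():
            return 0.0
        score = 0.0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board, depth, alpha, beta):
        """Alpha-beta search (negamax form) to a fixed depth in plies."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1, -beta, -alpha))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:              # prune: this branch is already refuted
                break
        return best

    def best_move(board, depth=3):
        """Pick the legal move whose subtree scores best for the side to move."""
        choice, best = None, -float("inf")
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1, -float("inf"), float("inf"))
            board.pop()
            if score > best:
                choice, best = move, score
        return choice

    print("Suggested move:", best_move(chess.Board()))

    At depth 3 that plays dreadful but legal chess; the point is only the shape of the thing: a handcrafted number at the leaves and an exhaustive (pruned) search above it.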

    Alpha Zero, on the other hand, assigns each position a probability representing its winning chances, based on its previous experience in similar positions. That is, to my understanding, much closer to how a human plays chess: strong grandmasters do calculate (and are good at it), but they are also exceptionally good at pattern recognition and can give a quick evaluation of a position simply by looking at it. For example, they see the black and white pawn structures and know whether an endgame will favour one side or the other, despite equal material and the lack of any immediate threats.
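
    To show the contrast, here is a toy sketch of the other idea: instead of adding up material, a small neural network maps a board encoding straight to a win probability for the side to move. The encoding, the layer sizes, and the random (untrained) weights are all assumptions I made up for illustration; AlphaZero's real network is a deep residual net trained by self-play and coupled with Monte Carlo tree search, so this is only the general shape of the idea.

    Code:
    # Toy "value network" sketch: map a position directly to a winning probability.
    # Encoding, layer sizes and random weights are illustrative assumptions; a real
    # network would be trained on millions of self-play games, not left random.
    import numpy as np
    import chess

    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.05, size=(768, 64))   # stand-ins for parameters that
    b1 = np.zeros(64)                          # would be learned from self-play
    W2 = rng.normal(0, 0.05, size=(64, 1))
    b2 = np.zeros(1)

    def encode(board):
        """One-hot planes: 2 colours x 6 piece types x 64 squares = 768 features."""
        x = np.zeros((2, 6, 64))
        for square, piece in board.piece_map().items():
            colour = 0 if piece.color == board.turn else 1   # side to move first
            x[colour, piece.piece_type - 1, square] = 1.0
        return x.reshape(-1)

    def win_probability(board):
        """Forward pass: position in, estimated winning chance (0..1) out."""
        h = np.tanh(encode(board) @ W1 + b1)
        z = (h @ W2 + b2)[0]
        return float(1.0 / (1.0 + np.exp(-z)))               # sigmoid

    print("Start position, estimated winning chance:", win_probability(chess.Board()))

    The interesting part is where the number comes from: in a classic engine it is a handcrafted formula, while here it is whatever the network's parameters encode, i.e. "experience" distilled from previous games, which is what makes it resemble a grandmaster's pattern recognition in quiet positions.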

    Thanks to such an approach, Alpha Zero was able to crush Stockfish in their match, but an interesting thing happened during the just-finished match for the (human) world chess championship. In one of the games there was an endgame which seemed promising for the challenger, Fabiano Caruana, and at one moment a position appeared in which Stockfish very quickly declared that Caruana had a mate in 30 (meaning 30 moves by each of White and Black, so about 60 ply). A few days after the game, analysis employing Alpha Zero was made available, and AZ did not find the checkmate. The explanation lies in the aforementioned difference in how the two engines operate: Stockfish performed a search through the tree of possible positions and (thanks to modern hardware) quickly found the mate; Alpha Zero, on the other hand, didn't have experience with that type of endgame and simply did not see it.
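
    A rough back-of-the-envelope on why the two react so differently to a roughly 60-ply mate (the branching factor of 3 is my own assumption for a locked-down endgame; in a middlegame it is closer to 30-40, and real engines also cut the tree with transposition tables, extensions and tablebases):

    Code:
    # Why a ~60-ply forced mate is a search problem, not a pattern problem.
    # The branching factor of 3 is an assumed figure for a narrow endgame tree.
    branching_factor = 3
    plies = 60                                  # "mate in 30" = ~30 moves per side

    naive_tree = branching_factor ** plies
    # Alpha-beta with perfect move ordering visits roughly the square root of that:
    pruned_tree = branching_factor ** (plies // 2)

    print(f"naive tree:      ~{naive_tree:.2e} positions")
    print(f"with alpha-beta: ~{pruned_tree:.2e} positions")
    # Even the pruned figure only becomes tractable with further engine tricks,
    # but a search like this can in principle *prove* the mate; a purely
    # experience-based evaluation has nothing to prove it with if it never saw
    # that kind of position during training.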
     
    Calliers likes this.
