Libratus


Libratus is an artificial-intelligence computer program designed to play poker, specifically no-limit Texas hold 'em. Its creators intend for it to be generalisable to applications beyond poker.

While Libratus was written from scratch, it is the nominal successor of Claudico. Like its predecessor, its name is a Latin expression and means 'balanced'.

Libratus was built with more than 15 million core hours of computation, compared to 2–3 million for Claudico. The computations were carried out on the new 'Bridges' supercomputer at the Pittsburgh Supercomputing Center. According to one of Libratus' creators, Professor Tuomas Sandholm, Libratus does not have a fixed built-in strategy but an algorithm that computes the strategy. The technique involved is a new variant of counterfactual regret minimization (CFR), namely the CFR+ method introduced in 2014 by Oskari Tammelin. On top of CFR+, Libratus used a new technique that Sandholm and his PhD student Noam Brown developed for the problem of endgame solving. Their new method does away with the prior de facto standard in poker programming, called "action mapping".
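The core idea behind CFR-style methods can be illustrated in miniature with regret matching in self-play, using the regret floor at zero that distinguishes CFR+ from vanilla CFR. The sketch below uses rock-paper-scissors as a stand-in game; it is a toy illustration of the underlying principle, not Libratus's actual algorithm or code, and all names in it are invented for this example.

```python
# Toy sketch: regret matching with the CFR+ non-negative-regret floor,
# run in self-play on rock-paper-scissors. The time-averaged strategy
# converges toward the uniform Nash equilibrium (1/3, 1/3, 1/3).

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(i, j):
    """Utility of playing action i against action j."""
    if (i - j) % 3 == 1:
        return 1.0   # i beats j
    if (j - i) % 3 == 1:
        return -1.0  # j beats i
    return 0.0       # tie

def strategy_from_regrets(regrets):
    """Play each action in proportion to its accumulated positive regret."""
    total = sum(regrets)
    if total > 0:
        return [r / total for r in regrets]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet: play uniformly

def train(iterations):
    # Asymmetric start for player 0, so the dynamics are non-trivial.
    regrets = [[1.0, 0.0, 0.0], [0.0] * ACTIONS]
    strategy_sum = [0.0] * ACTIONS  # accumulates player 0's strategies
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        for a in range(ACTIONS):
            strategy_sum[a] += strats[0][a]
        for p in range(2):
            me, opp = strats[p], strats[1 - p]
            # Expected utility of each action vs the opponent's mixed strategy.
            util = [sum(opp[j] * payoff(i, j) for j in range(ACTIONS))
                    for i in range(ACTIONS)]
            value = sum(me[i] * util[i] for i in range(ACTIONS))
            for i in range(ACTIONS):
                # CFR+ twist: floor accumulated regret at zero.
                regrets[p][i] = max(regrets[p][i] + util[i] - value, 0.0)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy
```

Running `train(100000)` yields an average strategy close to uniform; the average (not the final iterate) is what converges toward equilibrium, which is also how CFR-family solvers extract their output strategy.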

Because Libratus plays against only one other human or computer player, the special 'heads-up' rules for two-player Texas hold 'em are enforced.

From January 11 to 31, 2017, Libratus was pitted in a tournament against four top-class human poker players: Jason Les, Dong Kim, Daniel McAulay and Jimmy Chou. To obtain more statistically significant results, 120,000 hands were to be played, a 50% increase over the previous tournament that Claudico played in 2015. To accommodate the extra volume, the duration of the tournament was extended from 13 to 20 days.

The four players were grouped into two subteams of two players each. One of the subteams was playing in the open, while the other subteam was located in a separate room nicknamed 'The Dungeon' where no mobile phones or other external communications were allowed. The Dungeon subteam got the same sequence of cards as was being dealt in the open, except that the sides were switched: The Dungeon humans got the cards that the AI got in the open and vice versa. This setup was intended to nullify the effect of card luck.
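Why switching the sides cancels card luck can be seen in a toy model in which each hand's winnings are the luck of the cards dealt plus a fixed skill edge. Everything below (function names, the Gaussian luck term, the 0.5 edge) is hypothetical and chosen for illustration only, not the tournament's actual scoring.

```python
import random

def hand_winnings(card_luck, skill_edge):
    # Toy model: a hand's result is the luck of the cards dealt
    # plus the stronger side's fixed skill edge (both hypothetical).
    return card_luck + skill_edge

def duplicate_match(n_hands=1000, skill_edge=0.5, seed=42):
    rng = random.Random(seed)
    open_room = dungeon = 0.0
    for _ in range(n_hands):
        luck = rng.gauss(0.0, 10.0)  # card luck dwarfs the skill edge
        # Open room: this side holds the lucky cards.
        open_room += hand_winnings(luck, skill_edge)
        # Dungeon: same deal with sides switched, so the luck is negated.
        dungeon += hand_winnings(-luck, skill_edge)
    # Averaging over both rooms cancels the luck term,
    # leaving only the skill edge.
    return (open_room + dungeon) / (2 * n_hands)
```

In this model the mirrored rooms recover the 0.5 skill edge exactly, however noisy the individual deals are, which is the point of the duplicate setup: the remaining variance comes from play, not from the cards.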

The prize money of $200,000 was shared exclusively among the human players. Each player received a minimum of $20,000, with the rest distributed according to their success against the AI. As stipulated in the tournament rules beforehand, the AI itself received no prize money even though it won the tournament against the human team.

