PokerStars Bad Beats? Full report

One common theory of how online poker could be rigged is the 'Bad Beat Theory': good players receive more 'bad beats' than they should, and poor players 'get lucky' too often when all the chips go in.

It has been argued that a site rigged in this way could increase its profit because the 'fish' would lose their money less quickly, the 'sharks' would win more slowly (and cash out less often) and more hands would be played.

In this experiment we put PokerStars to the test to see if good players receive more 'bad beats' than they should...

Hypotheses

Null Hypothesis: PokerStars is fair. The cards are dealt randomly.

Alternate Hypothesis: PokerStars is rigged. The community cards are biased to favour losing players in 'all-in' situations.


Explanation

In this test we analysed the results of 'heads up, pre-flop all-in' hands, i.e. hands in which exactly 2 players were 'all-in' before the flop. We compared the 'Expected Number of Hands Won' with the 'Actual Number of Hands Won' over a large sample size. We also compared the 'Expected Equity' (the percentage a hand should win if played an infinite number of times) with the 'Actual Equity' (the percentage the hands actually won).

If our null hypothesis were true (i.e. PokerStars is fair), the 'Expected Equity' would be very close to the 'Actual Equity' over a very large sample of hands.

Since we wished to compare results for winning and losing players it was necessary to make some assumptions:

Assumption 1: good players more often than not get 'their chips in ahead', i.e. their hands will (on average) have an 'Expected Equity' of greater than 50%. Also, poor players more often than not get 'their chips in bad', i.e. their hands will (on average) have an 'Expected Equity' of less than 50%.

Assumption 2: a good player's hand will 'dominate' their opponent's hand more often than vice versa in pre-flop all-ins, i.e. their hands will have an 'Expected Equity' of between 68% and 83% more often than they will have an 'Expected Equity' of between 17% and 32%. (A 'dominated' hand shares a card with a stronger hand, e.g. AQ against AK; the dominating hand wins roughly 70-75% of the time.)

Over a large sample if our alternate hypothesis and assumptions were correct:

1) Hands with an 'Expected Equity' of greater than 50% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of less than 50% would have an 'Actual Equity' which is greater than their 'Expected Equity'.

2) Hands with an 'Expected Equity' of 68-83% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of 17-32% would have an 'Actual Equity' which is greater than their 'Expected Equity'.


Dataset

For this test we used 1 million hands from $3+0.40, 9-player, standard 'Sit and Go' tournaments played at PokerStars. The hand histories were purchased from PTR and should comprise at least 95% of the hands played in these tournaments between 5th April 2011 and 1st May 2011.

Single table 'Sit and Gos' were chosen because these tournaments have a high proportion of 'pre-flop all-ins' and therefore provide a large sample of hands for analysis. Also, unlike 'cash games' or 'deep stack' tournaments, the results of these tournaments tend to be largely determined by 'pre-flop all-ins'.

$3 tournaments were chosen because the hand histories were more reasonably priced than higher buy-in 'Sit and Gos' and an almost complete sample of hand histories could be obtained for hands played over a one month period (this was not the case with $1 'SnGs').


Method

The hand histories were imported into Poker Tracker and we used the custom report feature to filter and sort the data. Many thanks to 'White Rider' and 'Kraada' at Poker Tracker, who provided excellent support and invaluable knowledge during this process.

The data was filtered to remove all hands that were not 'heads up, pre-flop all-ins'. The remaining hands were filtered by 'all-in call', i.e. the hands would be viewed from the perspective of the player that called the 'all-in' bet, and all duplicate hands from other players' perspectives were removed.
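
Poker Tracker performs this filtering internally through its report engine, but the logic is simple to sketch. The following is a minimal illustration only, assuming the hand histories have already been parsed into simple records; the field names ('hand_id', 'street', 'all_in_players', 'caller') are ours, not Poker Tracker's:

```python
# Illustrative sketch only: assumes each hand is a dict with hypothetical
# fields, e.g. {'hand_id': 123, 'street': 'preflop',
#               'all_in_players': ['a', 'b'], 'caller': 'b'}.
def heads_up_preflop_all_ins(hands):
    """Keep hands where exactly two players were all-in before the flop,
    viewed once each, from the perspective of the all-in caller."""
    seen = set()
    for hand in hands:
        if hand['street'] != 'preflop' or len(hand['all_in_players']) != 2:
            continue
        if hand['hand_id'] in seen:  # drop duplicate perspectives of a hand
            continue
        seen.add(hand['hand_id'])
        yield hand
```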

The outputs from the custom report were set as:

Hand I.D.: the unique reference number that PokerStars gives each hand. These were counted to produce the 'Total Number of Hands'.

Date: the date the hand was played.

Player: the screen name of the player that called the 'all-in' bet.

Hole Cards: the pocket cards of the 'all-in caller'.

Expected All-in Equity: this is expressed as a percentage, i.e. it is the probability of the caller winning the hand (p) multiplied by 100. This value is calculated by Poker Tracker using a 'Monte Carlo' method, so there are slight errors associated with each figure; for more details see the limitations and discussion section. (An illustrative sketch of Monte Carlo equity estimation appears below.)

Winner: the screen name of the player that won the hand.

Actual Result: a value of 1 was produced if the hand was won and a value of 0 if the hand was lost; by default split pots were recorded as won and therefore received a value of 1. These were summed to give the 'Total Number of Hands Won (including split pots)'.

Split Pots: if a hand resulted in a split pot a value of 1 was produced. These were summed to give the 'Total Number of Split Pots'.

The outputs were set to be ordered by 'Expected All-in Equity' so that the hands with the lowest expected all-in equity would appear at the top of the list running down to the hands with the highest expected all-in equity.
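
Poker Tracker's equity calculator is not public, so the sketch below is not its implementation; it only illustrates the general Monte Carlo approach, dealing random boards and scoring split pots as half a win (the same treatment split pots receive later in this analysis). All names are ours:

```python
# Self-contained Monte Carlo all-in equity estimator (illustrative only).
import random
from collections import Counter
from itertools import combinations

RANKS = '23456789TJQKA'

def rank5(cards):
    """Rank a 5-card hand; higher tuples beat lower tuples."""
    ranks = sorted((RANKS.index(c[0]) for c in cards), reverse=True)
    flush = len({c[1] for c in cards}) == 1
    wheel = ranks == [12, 3, 2, 1, 0]            # A-2-3-4-5 straight
    straight = wheel or all(ranks[i] - 1 == ranks[i + 1] for i in range(4))
    high = 3 if wheel else ranks[0]
    groups = sorted(Counter(ranks).items(),
                    key=lambda rc: (rc[1], rc[0]), reverse=True)
    shape = tuple(c for _, c in groups)          # e.g. (3, 2) = full house
    order = tuple(r for r, _ in groups)          # ranks in tie-break order
    if straight and flush:    return (8, high)
    if shape == (4, 1):       return (7,) + order
    if shape == (3, 2):       return (6,) + order
    if flush:                 return (5,) + tuple(ranks)
    if straight:              return (4, high)
    if shape == (3, 1, 1):    return (3,) + order
    if shape == (2, 2, 1):    return (2,) + order
    if shape == (2, 1, 1, 1): return (1,) + order
    return (0,) + tuple(ranks)

def best7(cards):
    """Best 5-card rank out of 7 cards."""
    return max(rank5(c) for c in combinations(cards, 5))

def monte_carlo_equity(hand1, hand2, trials=5000):
    """Estimate hand1's pre-flop all-in equity (%) against hand2.
    Split pots score 0.5; more trials means less Monte Carlo error."""
    deck = [r + s for r in RANKS for s in 'cdhs']
    for c in hand1 + hand2:
        deck.remove(c)
    score = 0.0
    for _ in range(trials):
        board = tuple(random.sample(deck, 5))
        r1, r2 = best7(hand1 + board), best7(hand2 + board)
        score += 1.0 if r1 > r2 else 0.5 if r1 == r2 else 0.0
    return 100.0 * score / trials

# A classic 'coin flip': AKs vs QQ should come out at roughly 46% for AKs.
print(monte_carlo_equity(('As', 'Ks'), ('Qd', 'Qc')))
```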

If you want to run this analysis on your own tournament hand histories you can download the custom report by clicking on the icon below and then importing it into Poker Tracker as normal:

Download Here:


Results

Upon import it was found that a small number of hand histories produced errors and that there were also a small number of duplicate hands. These accounted for at most a few thousand of the original 1 million hands and had a negligible effect on sample size.

The report was run and 69750 'heads-up, pre-flop all-ins' were produced. 88 of these hands were missing the output for 'Expected All-in Equity' (probably due to a 'bug' in Poker Tracker's equity-calculating program), so the total number of usable 'all-in' hands produced from the original sample of 1 million hands was 69663.

These hands were exported to 5 spreadsheets for analysis. The first was left unchanged and comprised all 69663 hands output from the report. The rest were divided into hands that were 'ahead' pre-flop, 'behind' pre-flop, 'dominating' and 'dominated':

Number of 'ahead' hands (expected all-in equity > 50%) = 36101

Number of 'behind' hands (expected all-in equity < 50%) = 33524

Number of 'dominating' hands (68% < expected all-in equity < 83%) = 14766

Number of 'dominated' hands (17% < expected all-in equity < 32%) = 12241

To view the full results in PDF format click on an icon below (NOTE: the summary is at the bottom of the last page of each spreadsheet).

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands

It is worth noting that the number of 'ahead' hands is significantly greater than the number of 'behind' hands, and also that the number of 'dominating' hands is greater than the number of 'dominated' hands. This can be explained by the following facts:

(a) we originally filtered our hands by 'all-in' caller (i.e. the hand is viewed from the perspective of the player that called the 'all-in'); and

(b) a significant proportion of players use the 'gap concept' when making 'all-in' decisions. Put simply, the 'gap concept' is the idea that it requires a stronger hand to call an 'all-in' bet than it does to be the initial bettor, because the initial bettor has some 'folding equity' while the caller has none.

When viewing the spreadsheets be aware of the formatting of hole cards. Each hole card has a number that represents the card: 2c = 1, 3c = 2, 4c = 3... Ac = 13, 2d = 14, 3d = 15... Ad = 26, 2h = 27... Ah = 39, 2s = 40... As = 52.
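
For convenience, this numbering can be decoded programmatically. A minimal sketch (the function name is ours):

```python
# Decode the spreadsheets' 1-52 card numbering: suits run c, d, h, s and
# ranks run 2..A within each suit, so 2c = 1, Ac = 13, 2d = 14, As = 52.
RANKS = '23456789TJQKA'
SUITS = 'cdhs'

def decode_card(n):
    """Map 1..52 to a card string, e.g. 1 -> '2c', 26 -> 'Ad', 52 -> 'As'."""
    rank, suit = (n - 1) % 13, (n - 1) // 13
    return RANKS[rank] + SUITS[suit]

assert decode_card(1) == '2c' and decode_card(26) == 'Ad' and decode_card(52) == 'As'
```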


Analysis

On each spreadsheet another column was added:

p(1-p), where p is the probability of the caller winning the hand. This value was calculated from the all-in equity and was summed over all hands to give ∑[p(1-p)].

The following outputs were used in calculations:

Total number of hands, n

Number of hands won (incl. split pots), w

Number of split pots, s

The following calculations were carried out in order to compare the actual number of hands won to the expected number of hands won (a short sketch implementing them follows this list):

The mean expected equity, x (%) was calculated by summing the value of 'expected all-in equity' for every hand and dividing the total by the number of hands, n.

Actual (effective) number of hands won, z = w - (s/2). It was necessary to adjust the number of hands won to take the number of split pots into consideration. Since all the hands were 'heads-up', split pots were considered to have an 'actual equity' of 0.5, compared with a value of 1 for a hand that was won and 0 for a hand that was lost. The number of hands won already contained a value of 1 for every split pot, and therefore the 'effective' number of hands won was calculated using the formula shown.

Expected number of hands won, e = xn/100 was calculated in order that this could then be compared to the actual number of hands won.

Actual deviation = z-e. The deviation of the actual number of hands won from the expected number was calculated by simply subtracting one from the other.

Standard Deviation = √∑[p(1-p)]. To see if the actual deviation from the expected results was within reasonable limits, the standard deviation of the population was calculated. In order to achieve this it was assumed that the population behaved as a binomial distribution. In reality the population is an imperfect binomial distribution: in a perfect binomial distribution the probability of success, p, must be the same for each trial, whereas here p varied from hand to hand.

Actual mean equity, a = (z/n)100 was also calculated so that it could be compared to the expected mean equity.
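
These calculations are straightforward to reproduce from the three spreadsheet columns described above. A short sketch (function and argument names are ours):

```python
from math import sqrt

def bad_beat_summary(equities, results, splits):
    """Summary statistics as defined in this section.

    equities: expected all-in equity of each hand, as a percentage (0-100)
    results:  1 if the hand was won (split pots count as won), else 0
    splits:   1 if the hand was a split pot, else 0
    """
    n = len(equities)
    w = sum(results)                     # hands won, incl. split pots
    s = sum(splits)                      # split pots
    x = sum(equities) / n                # mean expected equity, x (%)
    z = w - s / 2                        # effective hands won (splits = 0.5)
    e = x * n / 100                      # expected number of hands won
    # Binomial-style standard deviation: square root of the sum of p(1-p)
    sd = sqrt(sum((q / 100) * (1 - q / 100) for q in equities))
    a = 100 * z / n                      # actual mean equity (%)
    return {'n': n, 'expected_won': e, 'actual_won': z, 'deviation': z - e,
            'std_dev': sd, 'mean_expected_equity': x, 'actual_mean_equity': a}
```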

To view the full results in PDF format click on an icon below (NOTE: the summary is at the bottom of the last page of each spreadsheet):

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands


Results of 'Bad Beat' Analysis

Below is a table of the results for each of the data groups:

| Data Group | Equity Range (%) | Total No. Hands | Expected No. Hands Won | Actual No. Hands Won | Actual Deviation | Standard Deviation | Mean Expected Equity (%) | Actual Mean Equity (%) |
|---|---|---|---|---|---|---|---|---|
| All Hands | 0-100 | 69663 | 35748 | 35850 | +102 | 122 | 51.32 | 51.46 |
| Ahead | 50-100 | 36101 | 24002 | 24093 | +91 | 88 | 66.49 | 66.74 |
| Behind | 0-50 | 33524 | 11726 | 11738.5 | +12 | 85 | 34.98 | 35.02 |
| Dominating | 68-83 | 14766 | 10967 | 11041 | +74 | 53 | 74.27 | 74.77 |
| Dominated | 17-32 | 12241 | 3165 | 3181 | +16 | 48 | 25.86 | 25.99 |

From these results we can see that the actual number of hands won is very close to the expected number of hands won for every data group. In all cases the actual deviation is well within 2 standard deviations and there are no discrepancies between the 'ahead' and 'behind' samples or the 'dominating' and 'dominated' samples.
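
As a quick check against the table: the 'All Hands' deviation is 102/122 ≈ 0.84 standard deviations, 'Ahead' is 91/88 ≈ 1.03 and the largest, 'Dominating', is 74/53 ≈ 1.40, all comfortably inside the 2 standard deviation band.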

Also, since hands were filtered by 'all-in' caller, the 'ALL HANDS' data group also tests for bias between players that originally made an 'all-in' bet and players that called an 'all-in' bet. The results show no bias in this respect either.


Conclusion

If we return to our hypotheses: the evidence shows that our alternate hypothesis is incorrect for this sample of hands, i.e. the community cards are not biased to favour losing players in all-in situations. Also, it is a fair assumption that this sample of hands is representative of the general population of hands played at $3 'Sit and Go' tournaments at PokerStars at the present time.

We can therefore conclude that for $3 'Sit and Gos' our null hypothesis is correct and that PokerStars is fair (with respect to 'bad beats') at the present time.


Limitations & Discussion

This test was performed on a specific game type ($3 SnGs) during a specific period (April 2011) at a specific poker site (PokerStars) and the results can be considered true for these conditions only. Although these results are relevant to online poker in general they do not test other circumstances. Other poker sites may use different methods for the distribution of cards and other game types (e.g. cash hold'em) or levels (e.g. $10 SnG) at PokerStars could also use different programs for the deal. It is also true that the method of dealing at a given site could change in the future as the site updates. Online Poker Watchdog intends to perform this test on other poker sites for a variety of games and levels.

Also, this analysis only tests one aspect of potential rigging, i.e. the 'Bad Beat' theory. In theory there are many other ways that a poker site could be rigged that this test doesn't examine, for example it doesn't test the distribution of hole cards between players in any way. Online Poker Watchdog intends to perform further analyses, designed to test other theories of potential rigging of online poker.

One area of potential debate in this analysis is the assumptions about 'winning' and 'losing' players, i.e.: 1) good players more often than not get 'their chips in ahead', and 2) a good player's hand will 'dominate' their opponent's hand more often than vice versa in pre-flop all-ins. Since we have filtered hands on 'all-in caller' we are effectively comparing 'all-in' calls; for example, when we compare the data groups 'ahead' and 'behind' we are comparing 'good calls' with 'bad calls'. Therefore, we are effectively comparing players that are 'good callers' with players that are 'bad callers'.

In this regard the main point to be aware of is that this is just one aspect of what makes a winning 'Sit and Go' player. There are other skills that make a 'winning' player, such as the ability to steal blinds, good defence from the blinds and good 'deep stack' play in the early rounds. However, although making 'good all-in calls' does not completely define a winning poker player, it is a major factor that contributes to being a winning player. This is especially true for single table 'Sit and Gos', as the blind structure dictates that much of the game involves pre-flop all-in play.

Another area that has raised questions is the completeness and legitimacy of the original dataset. Hand histories were purchased from an independent third party and, although it is never possible to be 100% sure of the reliability of a third-party source, there is very little reason to believe that the hand histories are anything but legitimate. The dataset is not complete but is alleged to contain at least 95% of all the hands played in the tournaments described above.

Questions have been raised as to the effect of missing hand histories on the analysis. Obviously, if the hands were missed randomly then we simply have a slightly smaller sample size and this wouldn't change our results at all. However, it is possible that the missed hands were due to a software glitch when they were 'datamined' and that there is a pattern behind why they were missed. For this to adversely affect the analysis the missed hands would have to share some characteristic which meant that their actual equities and expected equities were related differently from the rest of the sample. This is highly unlikely.
