Ongame Bad Beats? Full report

One common theory of how online poker could be rigged is the 'Bad Beat Theory': good players receive more 'bad beats' than they should, and poor players 'get lucky' too often when all the chips go in.

It has been argued that a site rigged in this way could increase its profit because the 'fish' would lose their money less quickly, the 'sharks' would win more slowly (and cash out less often) and more hands would be played.

In this experiment we put the Ongame Network to the test to see if good players receive more 'bad beats' than they should...

Hypotheses

Null Hypothesis: Ongame is fair. The cards are dealt randomly.

Alternate Hypothesis: Ongame is rigged. The community cards are biased to favour losing players in 'all-in' situations.


Explanation

In this test we analysed the results of 'heads up, pre-flop all-in' hands, i.e. hands in which exactly 2 players were 'all-in' before the flop. We compared the 'Expected Number of Hands Won' with the 'Actual Number of Hands Won' over a large sample size. We also compared the 'Expected Equity' (the percentage a hand should win if played an infinite number of times) with the 'Actual Equity' (the percentage the hands actually won). For example, a hand with an 'Expected Equity' of 80% should win close to 800 of every 1,000 such all-ins.

If our null hypothesis were true (i.e. Ongame is fair), the 'Expected Number of Hands Won' would be very close to the 'Actual Number of Hands Won' for a large sample of hands.

Since we wished to compare results for winning and losing players it was necessary to make some assumptions:

Assumption 1: good players more often than not get 'their chips in ahead', i.e. their hands will (on average) have an 'Expected Equity' of greater than 50%. Also, poor players more often than not get 'their chips in bad', i.e. their hands will (on average) have an 'Expected Equity' of less than 50%.

Assumption 2: a good player's hand will more often than not 'dominate' their opponent's hand than vice-versa in pre-flop all-ins, i.e. their hands will have an 'Expected Equity' of between 68% and 83% (e.g. AK against AQ) more often than they will have an 'Expected Equity' of between 17% and 32%.

Over a large sample if our alternate hypothesis and assumptions were correct:

1) Hands with an 'Expected Equity' of greater than 50% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of less than 50% would have an 'Actual Equity' which is greater than their 'Expected Equity'.

2) Hands with an 'Expected Equity' of 68-83% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of 17-32% would have an 'Actual Equity' which is greater than their 'Expected Equity'.


Dataset

For this test we used over 1.42 million hands from 15c/30c, short handed (5max) cash game tables played at the Ongame Network. The hand histories were purchased from HandHQ* (many thanks to David for excellent customer service) and should consist of almost all the hands played at these tables between 6 and 23 June 2020.

Cash tables were used because it was difficult to obtain SnG hand histories for Ongame. Cash table hands were not ideal for this test as they yield fewer pre-flop all-ins than SnG tournaments. However, the number of hands purchased was so large that the number of pre-flop all-ins was still sufficient for a rigorous analysis.

15c/30c blinds were chosen because the hand histories were more reasonably priced than higher stakes hand histories and because these blind levels are almost exclusive to the Ongame Network.


Method

The hand histories were imported into Poker Tracker* and we used the custom report feature to filter and sort the data. Many thanks to the excellent support team at Poker Tracker for help and advice when putting the custom report together.

The data was filtered to remove all hands that were not 'heads up, pre-flop all-ins'. The remaining hands were then filtered by 'all-in caller', i.e. each hand was viewed from the perspective of the player that called the 'all-in' bet, and all duplicate hands from other players' perspectives were removed.
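If you keep your own hands in a flat file, the same filter can be sketched in a few lines of pandas. This is only an illustration: the file name and column names (allin_street, allin_players, is_allin_caller) are hypothetical, and a real Poker Tracker export will label its fields differently.

```python
import pandas as pd

# Hypothetical flat export of all hands; real field names will differ.
hands = pd.read_csv("ongame_hands.csv")

# Keep only hands where exactly two players were all-in before the flop.
hu_allins = hands[(hands["allin_street"] == "preflop")
                  & (hands["allin_players"] == 2)]

# Keep each hand once, viewed from the all-in caller's perspective;
# this drops the duplicate row recorded from the bettor's perspective.
callers = hu_allins[hu_allins["is_allin_caller"]]
```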

The outputs from the custom report were set as:

Hand I.D.: the unique reference number that Ongame gives each hand. These were counted to produce the 'Total Number of Hands'.

Date: the date the hand was played.

Player: the screen name of the player that called the 'all-in' bet.

Hole Cards: the pocket cards of the 'all-in caller'.

Expected All-in Equity: this is expressed as a percentage, i.e. it is the probability of the caller winning the hand (p) multiplied by 100. This value is calculated by Poker Tracker using a 'Monte Carlo' method and therefore there are slight errors associated with each figure; for more details see the limitations and discussion section, and the illustrative sketch after this list.

Winner: the screen name of the player that won the hand.

Actual Result: a value of 1 was produced if the hand was won and a value of 0 if it was lost. By default, split pots were recorded as won and therefore received a value of 1. These values were summed to give the 'Total Number of Hands Won (including split pots)'.

Split Pots: if a hand resulted in a split pot a value of 1 was produced. These were summed to give the 'Total Number of Split Pots'.
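To illustrate the 'slight errors' mentioned under 'Expected All-in Equity', the sketch below mimics a Monte Carlo equity estimate for a hand whose true win probability is already known. It is a toy model of the sampling error only, not Poker Tracker's actual routine, which deals random board run-outs rather than drawing coin flips.

```python
import random

def mc_equity(p_true: float, trials: int = 10_000) -> float:
    """Toy Monte Carlo equity estimate: board run-outs are replaced
    by Bernoulli draws with a known win probability p_true."""
    wins = sum(random.random() < p_true for _ in range(trials))
    return 100 * wins / trials

# The standard error is 100 * sqrt(p * (1 - p) / trials) percentage
# points, i.e. roughly +/-0.5 points for p = 0.5 and 10,000 trials.
print(mc_equity(0.5))  # prints something near, but rarely exactly, 50.0
```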

The outputs were set to be ordered by 'Expected All-in Equity' so that the hands with the lowest expected all-in equity would appear at the top of the list running down to the hands with the highest expected all-in equity.

If you want to run this analysis on your own cash table hand histories you can download the custom report by clicking on the icon below and then importing into Poker Tracker* as normal:

Download Here:


Results

Upon import, errors were found in 4,227 hand histories and there were no duplicate hands. These errors had a negligible effect on the sample size, which was 1,422,673 hands.

The report was run and 12,306 'heads-up, pre-flop all-ins' were produced.

These hands were exported to 5 spreadsheets for analysis. The first was left unchanged and comprised all 12,306 hands output from the report. The rest were divided into hands that were 'ahead' pre-flop, 'behind' pre-flop, 'dominating' and 'dominated':

Number of 'ahead' hands (>50% expected all-in equity) = 6,636

Number of 'behind' hands (<50% expected all-in equity) = 5,660

Number of 'dominating' hands (68% < expected all-in equity < 83%) = 3,049

Number of 'dominated' hands (17% < expected all-in equity < 32%) = 2,337

To view the full results in PDF format click on an icon below:

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands

It is worth noting that the number of 'ahead' hands is significantly greater than the number of 'behind' hands and also that the number of 'dominating' hands is greater than 'dominated' hands. This can be explained by the facts that:

(a) we originally filtered our hands by 'all-in' caller, (i.e. the hand is viewed from the perspective of the player that called the 'all-in') and

(b) a significant proportion of players use the 'gap concept' when making 'all-in' decisions. Put simply, the 'gap concept' is the idea that it requires a stronger hand to call an 'all-in' bet than it does to be the initial bettor, because the initial bettor has some 'folding equity' while the caller has none.

When viewing the spreadsheets be aware of the formatting of hole cards. Each hole card has a number which represents the card (2c = 1, 3c = 2, 4c = 3... Ac = 13, 2d = 14, 3d = 15... Ad = 26, 2h = 27... Ah = 39, 2s = 40... As = 52).
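A small helper function makes the encoding concrete. This decoder is ours, written for this report's numbering scheme, not something shipped with Poker Tracker:

```python
RANKS = "23456789TJQKA"
SUITS = "cdhs"  # clubs, diamonds, hearts, spades, in that order

def decode_card(n: int) -> str:
    """Decode a card number in the range 1-52 (2c = 1 ... As = 52)."""
    if not 1 <= n <= 52:
        raise ValueError("card number must be between 1 and 52")
    return RANKS[(n - 1) % 13] + SUITS[(n - 1) // 13]

print(decode_card(1), decode_card(13), decode_card(14), decode_card(52))
# -> 2c Ac 2d As
```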


Analysis

On each spreadsheet another column was added:

p(1-p), where p is the probability of the caller winning the hand. This value was calculated from the all-in equity and was summed over all hands to give ∑[p(1-p)].

The following outputs were used in calculations:

Total number of hands, n

Number of hands won (incl. split pots), w

Number of split pots, s

The following calculations were carried out in order to compare the actual number of hands won to the expected number of hands won:

The mean expected equity, x (%) was calculated by summing the value of 'expected all-in equity' for every hand and dividing the total by the number of hands, n.

Actual (effective) number of hands won, z = w - (s/2). It was necessary to adjust the number of hands won to take into consideration the number of split pots. Since all the hands were 'heads-up', split pots were considered to have an 'actual equity' of 0.5, compared with a value of 1 for a hand that was won and 0 for a hand that was lost. The number of hands won already contained a value of 1 for every split pot, and therefore the 'effective' number of hands won was calculated using the formula shown.

Expected number of hands won, e = xn/100, was calculated so that it could be compared to the actual number of hands won.

Actual deviation = z-e. The deviation of the actual number of hands won from the expected number was calculated by simply subtracting one from the other.

Standard Deviation = √∑[p(1-p)]. To see if the actual deviation from the expected results was within reasonable limits, the standard deviation of the population was calculated. In order to achieve this it was assumed that the population behaved as a binomial distribution. In reality the population is an imperfect binomial distribution, since the probability of success, p, varied for each hand; in a perfect binomial distribution the "probability of success of each event, p must be the same for each trial". For more on this see this discussion.

Actual mean equity, a = (z/n)100 was also calculated so that it could be compared to the expected mean equity.
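Taken together, the calculations above amount to a short script. In the sketch below each hand is reduced to a tuple (p, won, split): p is the expected all-in equity divided by 100, won is 1 if the caller won (split pots counted as won, matching the report output) and split is 1 for a split pot. The function name and input format are ours, not Poker Tracker's.

```python
import math

def badbeat_stats(hands):
    """hands: list of (p, won, split) tuples, one per all-in."""
    n = len(hands)
    w = sum(won for _, won, _ in hands)        # wins, incl. split pots
    s = sum(split for _, _, split in hands)    # split pots
    x = 100 * sum(p for p, _, _ in hands) / n  # mean expected equity (%)
    z = w - s / 2                              # effective wins: splits count 0.5
    e = x * n / 100                            # expected number of wins
    sd = math.sqrt(sum(p * (1 - p) for p, _, _ in hands))
    return {"total hands": n,
            "expected won": e,
            "actual won": z,
            "actual deviation": z - e,
            "standard deviation": sd,
            "mean expected equity": x,
            "actual mean equity": 100 * z / n}

# e.g. badbeat_stats([(0.81, 1, 0), (0.19, 0, 0), (0.50, 1, 1)])
```

A side note on the 'imperfect binomial' caveat above: a sum of independent wins with differing probabilities follows what statisticians call a Poisson binomial distribution, and its variance is exactly ∑[p(1-p)], so the varying p affects the shape of the distribution but not the standard deviation used here.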

To view the full results in PDF format click on an icon below:

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands


Results of 'Bad Beat' Analysis

Below is a table of the results for each of the data groups:

Data Group | Equity Range (%) | Total No. Hands | Expected No. Hands Won | Actual No. Hands Won | Actual Deviation | Standard Deviation | Mean Expected Equity | Actual Mean Equity
All Hands | 0-100 | 12,306 | 6,457 | 6,454.5 | -2.5 | 50 | 52.47 | 52.45
Ahead | 50-100 | 6,636 | 4,557 | 4,562 | +5 | 37 | 68.67 | 68.75
Behind | 0-50 | 5,660 | 1,895 | 1,887.5 | -7.5 | 34 | 33.48 | 33.35
Dominating | 68-83 | 3,049 | 2,312 | 2,311.5 | -0.5 | 23 | 75.83 | 75.81
Dominated | 17-32 | 2,337 | 573 | 560.5 | -12.5 | 21 | 24.53 | 23.98

From these results we can see that the actual number of hands won is extremely close to the expected number of hands won for every data group. In all cases the actual deviation is well within 1 standard deviation, and in all but one case within 0.25 of a standard deviation. There are no discrepancies between the 'ahead' and 'behind' samples or the 'dominating' and 'dominated' samples.
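As a quick restatement of the table, the deviations can be put in standard-deviation units (figures copied from the table above):

```python
groups = {                 # (actual deviation, standard deviation)
    "All Hands":  (-2.5, 50),
    "Ahead":      (+5.0, 37),
    "Behind":     (-7.5, 34),
    "Dominating": (-0.5, 23),
    "Dominated":  (-12.5, 21),
}
for name, (dev, sd) in groups.items():
    print(f"{name:<11} {dev / sd:+.2f} sd")
# Largest magnitude is 'Dominated' at about -0.60 sd, comfortably inside
# the +/-1.96 sd band that a 95% two-sided test would allow.
```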

Also, since hands were filtered by 'all-in caller', the 'ALL HANDS' data group has tested for biases between players that originally made an all-in bet and players that called an all-in bet. The results show no bias in this respect.


Conclusion

If we return to our hypotheses: the evidence shows that our alternate hypothesis is incorrect for this sample of hands, i.e. the community cards are not biased to favour losing players in all-in situations. Also, it is a fair assumption that this sample of hands is representative of the general population of hands played at 15c/30c, short handed cash tables at the Ongame Network at the present time.

We can therefore conclude that for 15c/30c, short handed, cash tables our null hypothesis is correct and that Ongame is fair (with respect to 'bad beats') at the present time.


Limitations & Discussion

This test was performed on a specific game type (15c/30c short handed tables) during a specific period (6-23 June 2020) at a specific poker network (Ongame) and the results can be considered true for these conditions only. Although these results are relevant to online poker in general other circumstances were not tested. Other poker sites may use different methods for the distribution of cards and other game types (e.g. multi-table tournament hold'em) or levels (e.g. $2/$4) at Ongame could also use different programs for the deal. It is also true that the method of dealing at a given site could change in the future as the site updates. Online Poker Watchdog intends to perform this test on other poker sites for a variety of games and levels.

Also, this analysis only tests one aspect of potential rigging, i.e. the 'Bad Beat' theory. In theory there are many other ways that a poker site could be rigged that this test doesn't examine; for example, it doesn't test the distribution of hole cards between players in any way. Online Poker Watchdog intends to perform further analyses designed to test other theories of potential rigging of online poker.

One area of potential debate in this analysis is the assumptions about 'winning' and 'losing' players, i.e.: 1) good players more often than not get 'their chips in ahead' and 2) a good player's hand will more often than not 'dominate' their opponent's hand than vice-versa in pre-flop all-ins. Since we have filtered hands on 'all-in caller' we are effectively comparing 'all-in' calls, for example: when we compare the data-groups 'ahead' and 'behind' we are comparing 'good calls' with 'bad calls'. Therefore, we are effectively comparing players that are 'good callers' with players that are 'bad callers'.

The main point to be aware of here is that making good pre-flop all-in calls is only one small aspect of being a winning cash game player. There are other important skills that make a 'winning' cash player, such as post-flop play, the ability to bluff at the right time and adjusting to player types. However, making 'good all-in calls' pre-flop does contribute to being a winning player, even though it is definitely not the biggest factor in cash game play at the 15c/30c short handed tables.

Another area that has raised questions is the completeness and legitimacy of the original dataset. Hand histories were purchased from an independent 3rd party and, although it is never possible to be 100% sure of the reliability of a 3rd party source, there is very little reason to believe that the hand histories are anything but legitimate. The dataset is, however, not complete.

Questions have been raised as to the effect of missing hand histories on the analysis. Obviously, if the hands were missed randomly then we simply have a slightly smaller sample size and this wouldn't change our results at all. However, it is possible that the missed hands were due to a software glitch when they were 'data-mined' and that there is a pattern behind why they were missed. For this to adversely affect the analysis, the missed hands would have to share some characteristic which meant that their actual equities and expected equities were related differently from the rest of the sample. This is highly unlikely.
