Merge Network Bad Beats? Full Report

One common theory of how online poker could be rigged is the 'Bad Beat Theory': good players receive more 'bad beats' than they should, and poor players 'get lucky' too often when all the chips go in.

It has been argued that a site rigged in this way could increase its profit because the 'fish' would lose their money less quickly, the 'sharks' would win more slowly (and cash out less often) and more hands would be played.

In this experiment we put the Merge Network to the test to see if good players receive more 'bad beats' than they should...

Hypotheses

Null Hypothesis: The Merge Network is fair. The cards are dealt randomly.

Alternate Hypothesis: Merge is rigged. The community cards are biased to favour losing players in 'all-in' situations.


Explanation

In this test we analysed the results of 'pre-flop all-in' hands and compared the 'Expected number of hands won' with the 'Actual number of hands won' over a large sample.

If our null hypothesis were true (i.e. the Merge Network is fair), the 'Expected number of hands won' would be very close to the 'Actual number of hands won'.

Since we wished to compare results for winning and losing players it was necessary to make some assumptions:

Assumption 1: good players more often than not get 'their chips in ahead', i.e. their hands will (on average) have an 'Expected Equity' of greater than 50%. Also, poor players more often than not get 'their chips in bad', i.e. their hands will (on average) have an 'Expected Equity' of less than 50%.

Assumption 2: a good player's hand will more often than not 'dominate' their opponent's hand (rather than vice-versa) in pre-flop all-ins, i.e. their hands will have an 'Expected Equity' of between 68% and 83% more often than they will have an 'Expected Equity' of between 17% and 32%. For example, AK 'dominates' AQ (the hands share an ace, and AK has the better kicker) and wins roughly 73% of the time pre-flop. For more information on 'dominated hands' click here.

Over a large sample, if our alternate hypothesis and assumptions were correct:

1) Hands with an 'Expected Equity' of greater than 50% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of less than 50% would have an 'Actual Equity' which is greater than their 'Expected Equity'.

2) Hands with an 'Expected Equity' of 68-83% would have an 'Actual Equity' which is less than their 'Expected Equity'. Also, hands with an 'Expected Equity' of 17-32% would have an 'Actual Equity' which is greater than their 'Expected Equity'.


Dataset

For this test we used over 1 million hands from 25c/50c, heads-up (2max) cash game tables played on the Merge Network. The hand histories were purchased from Poker Table Ratings* and should consist of almost all the hands played at these tables between 4th May 2011 and 1st June 2011.

2-player cash table hands were used because we had not tested these tables yet - in our other tests we have analysed 6-max and full ring cash games and 9-max sit and go tournaments. We found that there were very few pre-flop all-ins at these tables - probably due to the nature of heads-up play. This meant that the sample sizes were small, especially for dominated and dominating hands. However, it was still possible to get decent sample sizes for ahead and behind hands.

25c/50c blinds were chosen because the hand histories were more reasonably priced than higher stakes hand histories.


Method

The hand histories were imported into Poker Tracker and we used the custom report feature to filter and sort the data.

The data was filtered to remove all hands that were not 'pre-flop all-ins'. The remaining hands were then filtered by 'all-in call', i.e. each hand would be viewed from the perspective of the player that called the 'all-in' bet, and all duplicate copies of the hand from other players' perspectives were removed.
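Our filtering was done inside Poker Tracker's custom report, but the same logic is easy to express in code. The sketch below is purely illustrative: it assumes a hypothetical CSV export with one row per player per hand, and the column names ('hand_id', 'preflop_all_in', 'is_caller') are our own, not Poker Tracker's.

```python
# Illustrative only - a hypothetical export, not Poker Tracker's actual schema.
import pandas as pd

hands = pd.read_csv("merge_hands.csv")  # one row per player per hand (assumed)

# Keep only hands that went all-in pre-flop.
all_ins = hands[hands["preflop_all_in"]]

# View each hand from the perspective of the player who CALLED the all-in,
# dropping the duplicate rows seen from other players' perspectives.
callers = all_ins[all_ins["is_caller"]].drop_duplicates(subset="hand_id")
```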

The outputs from the custom report were set as:

Hand I.D.: the unique reference number that the Merge Network gives each hand. These were counted to produce the 'Total Number of Hands'.

Date: the date the hand was played.

Player: the screen name of the player that called the 'all-in' bet.

Hole Cards: the pocket cards of the 'all-in caller'.

Expected All-in Equity: this is expressed as a percentage, i.e. it is the probability of the caller winning the hand (p) multiplied by 100. This value is calculated by Poker Tracker using a 'Monte Carlo' method, so there are slight errors associated with each figure (a sketch of this kind of estimate appears below); for more details see the Limitations & Discussion section.

Winner: the screen name of the player that won the hand.

Actual Result: if the hand was won a value of 1 was produced and if the hand was lost a value of 0 was produced; by default split pots were recorded as won and therefore received a value of 1. These were summed to give the 'Total Number of Hands Won (including split pots)'.

Split Pots: if a hand resulted in a split pot a value of 1 was produced. These were summed to give the 'Total Number of Split Pots'.

The outputs were set to be ordered by 'Expected All-in Equity' so that the hands with the lowest expected all-in equity would appear at the top of the list running down to the hands with the highest expected all-in equity.
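Poker Tracker's equity routine is proprietary, but the general Monte Carlo idea can be sketched as follows: deal many random boards and count how often the caller's hand wins. This is a minimal illustration under our own assumptions, not Poker Tracker's implementation; it relies on the third-party 'treys' hand evaluator, and the function name monte_carlo_equity is ours.

```python
# A minimal Monte Carlo all-in equity sketch (illustrative, not Poker Tracker's code).
# Requires the third-party 'treys' evaluator: pip install treys
import random
from treys import Card, Deck, Evaluator

def monte_carlo_equity(hero, villain, trials=10_000):
    """Estimate hero's pre-flop all-in equity (%) against villain."""
    evaluator = Evaluator()
    dead = set(hero + villain)
    stub = [c for c in Deck.GetFullDeck() if c not in dead]
    equity = 0.0
    for _ in range(trials):
        board = random.sample(stub, 5)        # deal a random 5-card board
        h = evaluator.evaluate(board, hero)   # lower score = stronger hand
        v = evaluator.evaluate(board, villain)
        if h < v:
            equity += 1.0                     # outright win
        elif h == v:
            equity += 0.5                     # split pot counts as half
    return 100.0 * equity / trials

# Example: the 'dominating' match-up AK vs AQ - expect roughly 73%.
print(monte_carlo_equity([Card.new('As'), Card.new('Kd')],
                         [Card.new('Ah'), Card.new('Qc')]))
```

Because the estimate comes from random sampling it carries a small error of its own, which is the source of the 'slight errors' mentioned above.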

If you want to run this analysis on your own cash table hand histories you can download the custom report by clicking on the icon below and then importing into Poker Tracker as normal:

Download Here:


Results

The hand histories imported without errors, and only 2 duplicate hands were found.

The report was run and 3,824 'pre-flop all-ins' were produced. 2 of these hands were missing the output for 'Expected All-in Equity' (probably due to a 'bug' in Poker Tracker's equity-calculating program), so the total number of usable 'all-in' hands produced from the original sample of 1 million hands was 3,822.

These hands were exported to 5 spreadsheets for analysis. The first was left unchanged and comprised all 3,822 hands output from the report. The rest were divided into hands that were 'ahead' pre-flop, 'behind' pre-flop, 'dominating' and 'dominated':

Number of 'ahead' hands (expected all-in equity > 50%) = 2,165

Number of 'behind' hands (expected all-in equity < 50%) = 1,652

Number of 'dominating' hands (68% < expected all-in equity < 83%) = 1,031

Number of 'dominated' hands (17% < expected all-in equity < 32%) = 688

To view the full results in PDF format click on an icon below:

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands

It is worth noting that the number of 'ahead' hands is significantly greater than the number of 'behind' hands, and also that the number of 'dominating' hands is greater than the number of 'dominated' hands. This can be explained by two facts:

(a) we originally filtered our hands by 'all-in' caller (i.e. each hand is viewed from the perspective of the player that called the 'all-in'), and

(b) a significant proportion of players use the 'gap concept' when making 'all-in' decisions. Put simply, the 'gap concept' is the idea that it requires a stronger hand to call an 'all-in' bet than to make the initial bet, because the initial bettor has some 'folding equity' while the caller has none.

When viewing the spreadsheets be aware of the formatting of hole cards. Each hole card has a number which represents the card: 2c = 1, 3c = 2, 4c = 3... Ac = 13, 2d = 14, 3d = 15... Ad = 26, 2h = 27... Ah = 39, 2s = 40... As = 52.
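For convenience, here is a small sketch that decodes this numbering; the function decode_card is our own helper, not part of the spreadsheets:

```python
# Decode the spreadsheets' 1-52 card numbering described above:
# clubs 2c..Ac = 1..13, then diamonds, hearts and spades in the same order.
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"]
SUITS = ["c", "d", "h", "s"]

def decode_card(n):
    """Map a card number 1..52 to a rank/suit string, e.g. 1 -> '2c'."""
    suit, rank = divmod(n - 1, 13)
    return RANKS[rank] + SUITS[suit]

assert decode_card(1) == "2c" and decode_card(13) == "Ac"
assert decode_card(14) == "2d" and decode_card(52) == "As"
```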


Analysis

On each spreadsheet another column was added:

p(1-p), where p is the probability of the caller winning the hand. This value was calculated from the all-in equity for each hand and summed over all hands to give ∑[p(1-p)].

The following outputs were used in calculations:

Total number of hands, n

Number of hands won (incl. split pots), w

Number of split pots, s

The following calculations were carried out in order to compare the actual number of hands won to the expected number of hands won (a worked sketch of the whole sequence follows this list):

The mean expected equity, x (%) was calculated by summing the value of 'expected all-in equity' for every hand and dividing the total by the number of hands, n.

Actual (effective) number of hands won, z = w - (s/2). It was necessary to adjust the number of hands won to take the split pots into account. Since all the hands were 'heads-up', a split pot was considered to have an 'actual equity' of 0.5, compared with a value of 1 for a hand that was won and 0 for a hand that was lost. The number of hands won already contained a value of 1 for every split pot, so the 'effective' number of hands won was calculated using the formula shown.

Expected number of hands won, e = xn/100 was calculated in order that this could then be compared to the actual number of hands won.

Actual deviation = z-e. The deviation of the actual number of hands won from the expected number was calculated by simply subtracting one from the other.

Standard Deviation = √∑[p(1-p)]. To see if the actual deviation from the expected results was within reasonable limits, the standard deviation of the population was calculated. In order to achieve this it was assumed that the population behaved as a binomial distribution. In reality the population is an imperfect binomial distribution, since the probability of success, p, varied for each hand; in a perfect binomial distribution the "probability of success of each event, p must be the same for each trial". Strictly, a sum of independent trials with varying p follows a 'Poisson binomial' distribution, whose standard deviation is exactly √∑[p(1-p)], so the formula used here remains valid. For more on this see this discussion.

Actual mean equity, a = (z/n)100 was also calculated so that it could be compared to the expected mean equity.
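To make the sequence of calculations concrete, here is a minimal sketch using three made-up hands; the records and their values are invented purely for illustration:

```python
# A worked sketch of the calculations above on invented example data.
# Each record is (expected all-in equity %, won?, split pot?); splits are
# recorded as 'won', matching the report's convention.
import math

hands = [(72.5, True, False), (31.0, False, False), (50.1, True, True)]

n = len(hands)                                # total number of hands
w = sum(1 for _, won, _ in hands if won)      # hands won (incl. split pots)
s = sum(1 for _, _, split in hands if split)  # number of split pots

x = sum(eq for eq, _, _ in hands) / n         # mean expected equity, x (%)
z = w - s / 2                                 # actual (effective) hands won
e = x * n / 100                               # expected number of hands won
deviation = z - e                             # actual deviation
a = (z / n) * 100                             # actual mean equity, a (%)

# Standard deviation of the total: sqrt of the sum of p(1-p) over all hands.
sd = math.sqrt(sum((eq / 100) * (1 - eq / 100) for eq, _, _ in hands))

print(f"deviation = {deviation:+.2f}, sd = {sd:.2f}, "
      f"within 2 sd: {abs(deviation) <= 2 * sd}")
```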

To view the full results in PDF format click on an icon below:

ALL hands

AHEAD hands

BEHIND hands

DOMINATING hands

DOMINATED hands


Results of 'Bad Beat' Analysis

Below is a table of the results for each of the data groups:

Data Group    Equity Range (%)   Total No. Hands   Expected No. Hands Won   Actual No. Hands Won   Actual Deviation   Standard Deviation
All Hands     0-100              3,822             2,035                    2,014                  -21                28
Ahead         50-100             2,165             1,472                    1,468                  -4                 21
Behind        0-50               1,652             561                      543.5                  -17.5              19
Dominating    68-83              1,031             775                      778.5                  +3.5               14
Dominated     17-32              688               169.5                    170.5                  +1                 11

From these results we can see that the actual number of hands won is very close to the expected number of hands won for every data group. In all cases the actual deviation is well within 2 standard deviations and there are no discrepancies between the 'ahead' and 'behind' samples or the 'dominating' and 'dominated' samples.
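Expressed as z-scores (deviation divided by standard deviation, using the values from the table above), the point is even clearer; this quick check is ours, not part of the original analysis:

```python
# Deviation in units of standard deviation (z-score) for each data group,
# using the figures from the results table above. All fall well inside +/-2.
groups = {
    "All Hands":  (-21.0, 28.0),
    "Ahead":       (-4.0, 21.0),
    "Behind":     (-17.5, 19.0),
    "Dominating":  (+3.5, 14.0),
    "Dominated":   (+1.0, 11.0),
}
for name, (deviation, sd) in groups.items():
    print(f"{name:10s} z = {deviation / sd:+.2f}")
```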

In addition, since hands were filtered by 'all-in' caller, the 'ALL HANDS' data group tests for bias between players that originally made an all-in bet and players that called an all-in bet. The results show no bias in this respect.


Conclusion

Returning to our hypotheses: the evidence shows that our alternate hypothesis is incorrect for this sample of hands, i.e. the community cards are not biased to favour losing players in all-in situations. It is also a fair assumption that this sample of hands is representative of the general population of hands played at 25c/50c, heads-up cash tables on the Merge Network at the present time.

We can therefore conclude that for 25c/50c, 2-player, cash tables our null hypothesis is correct and that The Merge Network is fair (with respect to 'bad beats') at the present time.


Limitations & Discussion

This test was performed on a specific game type (25c/50c 2max tables) during a specific period (May-June 2011) at a specific poker network (Merge), and the results can be considered true for these conditions only. Although these results are relevant to online poker in general, other circumstances were not tested. Other poker sites may use different methods for the distribution of cards, and other game types (e.g. multi-table tournament hold'em) or levels (e.g. $2/$4) at Merge could also use different programs for the deal. It is also true that the method of dealing at a given site could change in the future as the site updates. Online Poker Watchdog intends to perform this test on other poker sites for a variety of games and levels.

Also, this analysis only tests one aspect of potential rigging, i.e. the 'Bad Beat' theory. In theory there are many other ways that a poker site could be rigged that this test doesn't examine, for example it doesn't test the distribution of hole cards between players in any way. Online Poker Watchdog intends to perform further analyses, designed to test other theories of potential rigging of online poker.

One area of potential debate in this analysis is the assumptions about 'winning' and 'losing' players, i.e.: 1) good players more often than not get 'their chips in ahead', and 2) a good player's hand will more often than not 'dominate' their opponent's hand (rather than vice-versa) in pre-flop all-ins. Since we have filtered hands on 'all-in caller' we are effectively comparing 'all-in' calls; for example, when we compare the data groups 'ahead' and 'behind' we are comparing 'good calls' with 'bad calls'. Therefore, we are effectively comparing players that are 'good callers' with players that are 'bad callers'.

In this regard, the main point to be aware of is that calling all-ins well is only one small aspect of being a winning cash game player. There are other important skills, such as post-flop play, the ability to bluff at the right time and adjusting to player types. Nevertheless, making 'good all-in calls' pre-flop is a factor that contributes to being a winning player, even if it is certainly not the biggest factor at the 25c/50c heads-up tables.

Another area that has raised questions is the completeness and legitimacy of the original dataset. The hand histories were purchased from an independent third party, and although it is never possible to be 100% sure of the reliability of a third-party source, there is very little reason to believe that the hand histories are anything but legitimate. The dataset is, however, not complete.

Questions have been raised as to the effect of missing hand histories on the analysis. Obviously, if the hands were missed randomly then we simply have a slightly smaller sample size, and this wouldn't change our results at all. However, it is possible that the missed hands were due to a software glitch when they were 'data-mined' and that there is a pattern behind why they were missed. For this to adversely affect the analysis, the missed hands would have to share some characteristic which meant that their actual equities and expected equities were related differently from the rest of the sample. This is highly unlikely.
