Complete Tests:

Test Results by Site:

The hand histories were imported into Poker Tracker* and we used the custom report feature to filter and sort the data. Many thanks to 'White Rider' and 'Kraada' at Poker Tracker, who provided excellent support and invaluable knowledge during this process.

The data was filtered to remove all hands that were not 'heads up, preflop all-ins'. The remaining hands were filtered by 'all-in call', i.e. each hand was viewed from the perspective of the player that called the 'all-in' bet, and all duplicate hands from other players' perspectives were removed.

The outputs from the custom report were set as:

Hand I.D.: the unique reference number that each poker site gives each hand. These were counted to produce the 'Total Number of Hands'.

Date: the date the hand was played.

Player: the screen name of the player that called the 'all-in' bet.

Hole Cards: the pocket cards of the 'all-in caller'.

Expected All-in Equity: this is expressed as a percentage, i.e. it is the probability of the caller winning the hand (p) multiplied by 100. This value is calculated by Poker Tracker using a 'Monte Carlo' method, so there are slight errors associated with each figure; for more details see the limitations and discussion section.

Winner: The screen name of the player that won the hand.

Actual Result: if the hand was won a value of 1 was produced and if the hand was lost a value of 0 was produced; by default, split pots were recorded as won and therefore received a value of 1. These were summed to give the 'Total Number of Hands Won (including split pots)'.

Split Pots: if a hand resulted in a split pot a value of 1 was produced. These were summed to give the 'Total Number of Split Pots'.

The outputs were set to be ordered by 'Expected All-in Equity' so that the hands with the lowest expected all-in equity would appear at the top of the list running down to the hands with the highest expected all-in equity.

If you want to run this analysis on your own hand histories you can download the custom report at our downloads page.

The outputs from the Poker Tracker report were exported to five Excel spreadsheets where they could be analysed.

The first spreadsheet was left unchanged and comprised all hands output from the report. The other spreadsheets were divided into hands that were 'ahead' preflop (expected all-in equity > 50%), 'behind' preflop (expected all-in equity < 50%), 'dominating' hands (68% < expected all-in equity < 83%) and 'dominated' hands (17% < expected all-in equity < 32%).
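The four equity bands can be sketched as a small classification helper. This is an illustrative sketch only: the function name and band labels are our own, not part of the Poker Tracker report output.

```python
def equity_band(equity):
    """Return the band labels for a hand's expected all-in equity (%).

    A hand can fall into two bands at once, e.g. 'ahead' and 'dominating'.
    Thresholds follow the spreadsheet definitions above.
    """
    bands = []
    if equity > 50:
        bands.append("ahead")       # ahead preflop
    elif equity < 50:
        bands.append("behind")      # behind preflop
    if 68 < equity < 83:
        bands.append("dominating")  # dominating hand
    if 17 < equity < 32:
        bands.append("dominated")   # dominated hand
    return bands
```

For example, a hand with 75% expected all-in equity falls into both the 'ahead' and 'dominating' bands, so it would appear on both of those spreadsheets.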

On each spreadsheet another column was added:

p(1-p), where p is the probability of the caller winning the hand. This value was calculated from the all-in equity and summed to give ∑[p(1-p)], so that the standard deviation of the sample could be calculated later.
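The extra column can be sketched as follows; the equity values below are illustrative placeholders, not figures from the dataset.

```python
# 'Expected All-in Equity' values in percent, as exported from the report.
equities = [62.3, 48.1, 81.5]  # illustrative values only

# Convert each percentage to a probability p, then form p(1-p) per hand.
variance_terms = [(e / 100) * (1 - e / 100) for e in equities]

# Sum the column to obtain the ∑[p(1-p)] used later for the standard deviation.
sum_p_one_minus_p = sum(variance_terms)
```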

The following outputs were obtained:

Total number of hands, n

Number of hands won (incl. split pots), w

Number of split pots, s

The following calculations were carried out in order to obtain the actual number of hands won, the expected number of hands won and the standard deviation of the sample.

The mean expected equity, x (%), was calculated by summing the value of 'expected all-in equity' for every hand and dividing the total by the number of hands, n.

Actual (effective) number of hands won, z = w - (s/2). It was necessary to adjust the number of hands won to take the number of split pots into consideration. Since all the hands were 'heads-up', split pots were considered to have an 'actual equity' of 0.5, compared with a value of 1 for a hand that was won and 0 for a hand that was lost. The number of hands won already contained a value of 1 for every split pot, so the 'effective' number of hands won was calculated using the formula shown.

Expected number of hands won, e = xn/100. This was calculated so that it could be compared to the actual number of hands won.

Actual deviation = z - e. The deviation of the actual number of hands won from the expected number was calculated by subtracting the expected number from the actual number.

Standard Deviation = √∑[p(1-p)]. To see whether the actual deviation from the expected results was within reasonable limits, the standard deviation of the population was calculated. To do this it was assumed that the population behaved as a binomial distribution. In reality the population is an imperfect binomial distribution, since the probability of success, p, varied between hands; in a perfect binomial distribution the "probability of success of each event, p must be the same for each trial". For more on this see this discussion.
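The calculations above can be collected into a short sketch. The counts and equity values here are illustrative placeholders, not real results from the tests.

```python
import math

# Illustrative inputs: in the real analysis these come from the spreadsheet.
equities = [62.3, 48.1, 81.5, 55.0]  # 'Expected All-in Equity' (%) per hand
n = len(equities)                    # total number of hands
w = 3                                # hands won incl. split pots (illustrative)
s = 1                                # number of split pots (illustrative)

x = sum(equities) / n                # mean expected equity (%)
z = w - s / 2                        # actual (effective) number of hands won
e = x * n / 100                      # expected number of hands won
actual_deviation = z - e             # actual minus expected hands won

# Standard deviation, treating the sample as an (imperfect) binomial:
sigma = math.sqrt(sum((eq / 100) * (1 - eq / 100) for eq in equities))
```

Under the binomial assumption, an actual deviation within roughly two standard deviations of zero would be unremarkable.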

More information on these bad beat tests can be found on the Explanation, Dataset, Method, Results, Conclusions & Discussion pages.