Complete Tests:

Test Results by Site:

The hand histories were imported into Poker Tracker* and we used the custom report feature to filter and sort the data.

The data was filtered to remove all hands that were not 'heads up, turn all-ins'. The remaining hands were filtered by 'all-in call', i.e. each hand was viewed from the perspective of the player that called the 'all-in' bet, and all duplicate copies of the hand from other players' perspectives were removed.

The outputs from the custom report were set as:

Site: an identifier for the online poker site that the hand was played at.

Hand ID: the unique reference number that each poker site gives each hand. These were counted to produce the 'Total Number of Hands'.

Date: the date the hand was played.

No. Players at River: the number of players at the river was output to check that the filters were working correctly (for these heads-up hands it should always be 2).

Hole Cards: the pocket cards of the 'all-in caller'.

Player: the screen name of the player that called the 'all-in' bet.

Flop and Turn Cards: the 3 flop cards and the turn card were listed.

Expected All-in Equity: this is expressed as a percentage, i.e. it is the probability of the caller winning the hand (p) multiplied by 100. This value is calculated by Poker Tracker using a 'Monte Carlo' method, so there is a small error associated with each figure; for more details see the limitations and discussion section.

Winner: The screen name of the player that won the hand.

Actual Result: if the hand was won a value of 1 was produced and if the hand was lost a value of 0 was produced; by default split pots were recorded as won and therefore also received a value of 1. These were summed to give the 'Total Number of Hands Won (including split pots)'.

Split Pots: if a hand resulted in a split pot a value of 1 was produced. These were summed to give the 'Total Number of Split Pots'.

The outputs were set to be ordered by 'Expected All-in Equity' so that the hands with the lowest expected all-in equity would appear at the top of the list running down to the hands with the highest expected all-in equity.
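As a rough sketch, the filter-and-sort steps above can be reproduced on exported hand records. The field names here are illustrative only, not Poker Tracker's actual export schema, and the values are made up:

```python
# Hypothetical hand records, one per all-in caller (illustrative fields/values).
hands = [
    {"hand_id": "1001", "players_at_river": 2, "equity_pct": 81.8},
    {"hand_id": "1002", "players_at_river": 2, "equity_pct": 18.2},
    {"hand_id": "1003", "players_at_river": 3, "equity_pct": 50.0},
]

# Keep only heads-up hands; the river player count doubles as the sanity
# check described above.
heads_up = [h for h in hands if h["players_at_river"] == 2]

# Order from lowest to highest expected all-in equity.
report = sorted(heads_up, key=lambda h: h["equity_pct"])
print([h["hand_id"] for h in report])  # lowest-equity hand first
```

Note that the three-player hand is dropped by the heads-up filter before sorting.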

If you want to run this analysis on your own hand histories you can download the custom report at our downloads page.

The outputs from the Poker Tracker report were exported to three Excel spreadsheets where they could be analysed.

The first spreadsheet was left unchanged and comprised all hands output from the report. The second spreadsheet contained only hands that were 'ahead' when the all-in was called (>50% expected all-in equity) and the third spreadsheet contained only hands that were 'behind' (<50% expected all-in equity).
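The split into 'ahead' and 'behind' sheets amounts to a simple threshold on the equity column. A minimal sketch, using made-up equity percentages:

```python
# Expected all-in equities (%) for the caller, one per hand (made-up values).
equities = [81.8, 18.2, 65.0, 31.9, 50.0]

ahead = [p for p in equities if p > 50]   # caller was ahead when the money went in
behind = [p for p in equities if p < 50]  # caller was behind
print(ahead, behind)
```

As written, a hand at exactly 50% equity falls into neither sheet, matching the strict > and < thresholds in the text.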

On each spreadsheet another column was added:

p(1-p), where p is the probability of the caller winning the hand. This value was calculated from the all-in equity and was summed across all hands to give ∑[p(1-p)], so that the standard deviation of the sample could be calculated later.
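The extra column and its sum can be sketched directly from the equity percentages (the values here are made up):

```python
# Expected all-in equities (%) for the caller (illustrative values).
equities_pct = [81.8, 18.2, 65.0, 31.9]

ps = [e / 100 for e in equities_pct]        # convert each % to a probability p
variance_terms = [p * (1 - p) for p in ps]  # the p(1-p) column, one value per hand
sum_p1p = sum(variance_terms)               # ∑[p(1-p)], used later for the std dev
print(sum_p1p)
```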

The following outputs were obtained:

Total number of hands, n

Number of hands won (incl. split pots), w

Number of split pots, s

The following calculations were carried out in order to obtain the actual number of hands won, the expected number of hands won and the standard deviation of the sample.

The mean expected equity, x (%), was calculated by summing the value of 'expected all-in equity' for every hand and dividing the total by the number of hands, n.

Actual (effective) number of hands won, z = w - (s/2). The number of hands won had to be adjusted for split pots. Since all the hands were heads-up, a split pot was assigned an 'actual equity' of 0.5, compared with 1 for a hand that was won and 0 for a hand that was lost. Because the number of hands won already counted each split pot as 1, the 'effective' number of hands won was calculated using the formula shown.

Expected number of hands won, e = xn/100 was calculated in order that this could then be compared to the actual number of hands won.

Actual deviation = z-e. The deviation of the actual number of hands won from the expected number was calculated by simply subtracting one from the other.
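The three steps above can be worked through with made-up totals; the symbols match those in the text (n, w, s, x):

```python
# Made-up totals for illustration only.
n = 1000   # total number of hands
w = 530    # hands won, with split pots counted as wins
s = 20     # split pots
x = 51.5   # mean expected equity (%)

z = w - s / 2     # effective hands won: each split pot is worth 0.5, not 1
e = x * n / 100   # expected hands won implied by the mean equity
actual_deviation = z - e
print(z, e, actual_deviation)  # 520.0 515.0 5.0
```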

Standard Deviation = √∑[p(1-p)]. To check whether the actual deviation from the expected result was within reasonable limits, the standard deviation of the sample was calculated. In order to achieve this it was assumed that the sample behaved as a binomial distribution. In reality it is an imperfect binomial distribution, since the probability of success, p, varied from hand to hand; in a perfect binomial distribution the "probability of success of each event, p must be the same for each trial". For more on this see this discussion.

The actual deviation of the 'ahead hands' was subtracted from the actual deviation of the 'behind hands' to give the actual deviation of the whole sample, treating 'behind hands' that improved to win the same as 'ahead hands' that lost to a bad beat.

The standard deviation of all these hands was calculated using √∑[p(1-p)].

Finally, the actual deviation was divided by the standard deviation to give the number of deviations from expectancy.
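The last two steps, taken together, reduce to a square root and a division. A sketch with made-up inputs:

```python
import math

# Made-up inputs for illustration only.
sum_p1p = 240.25        # ∑[p(1-p)] over the sample
actual_deviation = 5.0  # z - e from the earlier step

std_dev = math.sqrt(sum_p1p)               # √∑[p(1-p)]
n_deviations = actual_deviation / std_dev  # number of deviations from expectancy
print(std_dev, round(n_deviations, 3))     # 15.5 0.323
```

A result well within a few standard deviations would suggest the observed win rate is consistent with the expected equities.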

More information on these bad beat tests can be found on the Explanation, Dataset, Method, Results, Conclusions & Discussion pages.