What Are False Positives and How Does FigPii Deal With Them?

A brief explanation of the "multiple testing" problem

When you conduct a two-tailed test comparing the conversion rate of the control (P1) with the conversion rate of the variation (P2), your hypotheses are:

Null hypothesis: H0: P1 = P2
Alternative hypothesis: H1: P1 ≠ P2

Your goal in conducting the A/B test is to reject the null hypothesis (H0) that both rates are equal. You never accept the null hypothesis: if your test does not produce a winner, it means you do not have enough evidence to reject H0. If the null hypothesis is true (the two rates really are equal) and you do not reject it, your decision is correct. The same applies when your test has a winner: the null hypothesis is false and you correctly reject it. However, when you reject a null hypothesis that is actually true, you make a type I error (a false positive). Similarly, when the null hypothesis is false but you fail to reject it, you make a type II error (a false negative).
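To see this decision rule in code, here is a minimal sketch of a two-tailed, two-proportion z-test in Python. The function name two_tailed_ab_test, the traffic numbers, and the use of scipy are illustrative assumptions, not FigPii's internal implementation.

```python
# Illustrative sketch: two-proportion z-test for H0: P1 = P2 vs H1: P1 != P2.
# Assumes scipy is installed; this is not FigPii's internal implementation.
from scipy.stats import norm

def two_tailed_ab_test(conv1, n1, conv2, n2, alpha=0.05):
    """Return (p_value, reject_h0) for control vs variation conversion counts."""
    p1 = conv1 / n1          # observed control conversion rate (P1)
    p2 = conv2 / n2          # observed variation conversion rate (P2)
    p_pool = (conv1 + conv2) / (n1 + n2)   # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))   # two-tailed p-value
    # Rejecting H0 when it is actually true is a type I error (false positive);
    # failing to reject H0 when it is false is a type II error (false negative).
    return p_value, p_value < alpha

# Hypothetical traffic numbers, for illustration only:
p_value, winner = two_tailed_ab_test(conv1=200, n1=5000, conv2=260, n2=5000)
print(f"p-value = {p_value:.4f}, declare a winner: {winner}")
```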

How can you prevent these statistical errors when conducting an A/B test?

The probability of a type II error (false negative) is denoted in statistics by beta (β). Increasing the sample size of your test reduces the chance of a type II error. The probability of a type I error (false positive) is denoted by alpha (α). You typically design your test with a significance level of 5% to limit the risk of a type I error. A 5% significance level means that if the two rates are truly equal, there is only a 5% chance that the test will incorrectly declare a winner. Put differently, when your test does declare a winner, the difference between the control and the variation is statistically significant at the 95% confidence level.
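As a rough illustration of how alpha, beta, and sample size interact, the sketch below computes the per-variation sample size needed to detect a given lift at a 5% significance level with 80% power (β = 0.2), using the standard two-proportion sample-size formula. The function name and the conversion rates are assumptions for illustration, not a description of FigPii's calculator.

```python
# Illustrative sketch: per-variation sample size for a two-tailed
# two-proportion test at significance level alpha with power 1 - beta.
# Standard textbook formula; not FigPii's internal calculator.
from scipy.stats import norm

def required_sample_size(p1, p2, alpha=0.05, beta=0.20):
    """Visitors needed per variation to detect a change from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(1 - beta)         # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2               # average rate under the alternative
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Hypothetical example: detect a lift from 4% to 5% conversion.
print(required_sample_size(0.04, 0.05))  # about 6,745 visitors per variation
```

Note that the required sample size scales with the inverse square of the lift you want to detect: halving the expected improvement roughly quadruples the traffic needed, which is why tests chasing small lifts must run much longer.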

Still need help? Contact Us