A/B Test Calculator
Stop guessing. Use statistics to determine if your new design actually performs better.
Example output: at 80% statistical confidence, Variation B is better by 30.00%, but there is not enough data to be sure; the difference could be due to random chance.
How to Use the A/B Test Calculator
To check if your split test results are statistically significant, enter the number of visitors and conversions for both your Control (Version A) and Variant (Version B). Our tool uses a standard Z-test to calculate the p-value and confidence level.
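The calculator does this for you, but the underlying math is straightforward. The sketch below is a minimal Python version of a standard two-proportion pooled Z-test; the function name `z_test` and the visitor/conversion counts in the example are hypothetical and not taken from the tool itself.

```python
from math import sqrt, erf

def z_test(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion pooled z-test for an A/B split test."""
    p_a = conversions_a / visitors_a          # conversion rate of Control (A)
    p_b = conversions_b / visitors_b          # conversion rate of Variant (B)
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    confidence = (1 - p_value) * 100
    return z, p_value, confidence

# Hypothetical example: 1,000 visitors per version, 100 vs. 130 conversions
z, p, conf = z_test(1000, 100, 1000, 130)
print(f"z = {z:.2f}, p = {p:.4f}, confidence = {conf:.1f}%")
```

With these example numbers the test reports roughly 96.5% confidence, which would clear the usual 95% bar discussed below.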
What is Statistical Significance?
Statistical significance measures how unlikely your observed difference would be if there were actually no real difference between your test versions. In conversion rate optimization (CRO), we generally aim for at least 95% confidence before declaring a winner. That corresponds to a p-value below 0.05: a difference at least this large would show up less than 5% of the time by random chance alone.
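As a rough illustration of that decision rule (assuming a two-sided test and the 95% convention above), the sketch below compares a p-value against an alpha of 0.05. The p-value in the example is the one produced by the hypothetical numbers in the previous sketch.

```python
ALPHA = 0.05  # accept a 5% false-positive risk, i.e. a 95% confidence target

def is_significant(p_value, alpha=ALPHA):
    """Declare a winner only when the p-value falls below the chosen alpha."""
    return p_value < alpha

print(is_significant(0.0355))  # True: clears the 95% confidence bar
```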
Important A/B Testing Metrics
- Conversion Rate: The percentage of visitors who completed the desired action.
- Lift: The relative percentage increase (or decrease) in conversion rate from Version A to Version B (computed in the sketch after this list).
- Confidence Level: How certain we are that the observed difference reflects a real effect rather than random noise.
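To make the first two metrics concrete, here is a small sketch using hypothetical numbers (1,000 visitors per version, 100 vs. 130 conversions) that happens to reproduce the 30.00% lift shown in the example output above.

```python
def conversion_rate(conversions, visitors):
    """Percentage of visitors who completed the desired action."""
    return conversions / visitors * 100

def lift(rate_a, rate_b):
    """Relative change in conversion rate from Version A to Version B."""
    return (rate_b - rate_a) / rate_a * 100

# Hypothetical example inputs
rate_a = conversion_rate(100, 1000)   # 10.0%
rate_b = conversion_rate(130, 1000)   # 13.0%
print(f"Lift: {lift(rate_a, rate_b):+.2f}%")  # +30.00%
```

A 30% lift on its own is not a verdict: whether you can trust it still depends on the confidence level from the Z-test above.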
