A/B Test Calculator
Calculate the statistical significance of your A/B test results. Enter your sample sizes and conversions to determine if your test has a clear winner.
Control (A)
Variant (B)
How to Use
- Enter the number of visitors and conversions for your Control (original) version.
- Enter the same for your Variant (the version you’re testing).
- Click “Calculate Significance” to see if the difference is statistically significant.
A result is typically considered statistically significant when the confidence level is 95% or higher (p-value < 0.05).
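Calculators like this one commonly use a two-proportion z-test to decide significance. Here is a minimal sketch in Python of that standard test; the pooled z-test is an assumption about how such a tool might work, not a statement of this calculator's exact method:

```python
import math

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-tailed, pooled two-proportion z-test -- a common way A/B
    calculators compute significance (assumed here for illustration)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (math.erf is stdlib)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 30/1,000 conversions for Control vs 50/1,000 for Variant
z, p = ab_significance(1000, 30, 1000, 50)
```

For that example the p-value comes out around 0.02, below the 0.05 threshold, so the difference would be reported as significant at the 95% level.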
Frequently Asked Questions
What does statistical significance mean?
Statistical significance means the observed difference between variants is unlikely to be explained by random chance alone. A p-value below 0.05 means that, if there were truly no difference between the variants, a result at least this extreme would occur less than 5% of the time; this corresponds to significance at the 95% confidence level.
How many visitors do I need for a valid A/B test?
It depends on your baseline conversion rate and the minimum detectable effect. As a rough rule of thumb, you need at least 1,000 visitors per variant for meaningful results, but tests aiming to detect small uplifts on low baseline rates can easily require 10,000+ per variant.
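Those visitor counts come from a standard power calculation. A sketch, assuming a two-proportion test at 95% confidence and 80% power (1.96 and 0.8416 are the usual normal quantiles for those choices; the textbook approximation below is for illustration, not this tool's formula):

```python
import math

def sample_size_per_variant(baseline_rate, variant_rate,
                            z_alpha=1.96,    # two-tailed alpha = 0.05
                            z_beta=0.8416):  # power = 0.80
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates (textbook approximation)."""
    variance = (baseline_rate * (1 - baseline_rate)
                + variant_rate * (1 - variant_rate))
    effect = variant_rate - baseline_rate
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a 20% relative uplift on a 3% baseline (3.0% -> 3.6%)
n = sample_size_per_variant(0.03, 0.036)
```

For that scenario the estimate lands near 14,000 visitors per variant, which is why small baselines with modest uplifts push far past the 1,000-visitor rule of thumb.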
What is a p-value?
The p-value represents the probability of observing the given results (or more extreme) if there were actually no difference between variants. Lower p-values indicate stronger evidence of a real difference.
What is relative uplift?
Relative uplift measures the percentage improvement of the variant over the control. For example, if the control converts at 3% and the variant at 3.6%, the relative uplift is 20%.
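The uplift arithmetic is simple enough to check directly; the function name here is illustrative, not part of the tool:

```python
def relative_uplift(control_rate, variant_rate):
    """Percentage improvement of the variant over the control."""
    return (variant_rate - control_rate) / control_rate * 100

# The example from above: 3% control vs 3.6% variant
uplift = relative_uplift(0.03, 0.036)  # approx. 20.0
```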