A/B Test Simulator
How it works
This A/B test simulator helps you understand how statistical significance works in real-world experiments. It simulates user behavior based on your configured conversion rates, showing you how p-values, confidence intervals, and significance evolve as your test collects data over time.
Unlike a real-world test, where you might wait weeks or months for results, the simulator shows you instantly how your experiment would perform with different conversion rates, sample sizes, and statistical parameters. This helps you understand why some tests require large sample sizes, how random variation affects results, and what to expect when running A/B tests.
After running a simulation, you'll see detailed statistical analysis including p-values over time, confidence intervals, conversion rate trends, and whether your test reached statistical significance based on your chosen alpha level.
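To make this concrete, here is a minimal sketch in Python of this kind of simulation. It assumes a 50/50 traffic split and a pooled two-proportion z-test evaluated at periodic checkpoints; the simulator's actual assignment logic and test statistic aren't documented here, and the function names are illustrative.

```python
import random
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a pooled two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def simulate_ab_test(control_rate, variant_rate, sample_size, alpha=0.05):
    """Assign each user 50/50, draw a Bernoulli conversion from the arm's
    true rate, and print the evolving p-value at periodic checkpoints."""
    conv_a = n_a = conv_b = n_b = 0
    for user in range(1, sample_size + 1):
        if random.random() < 0.5:
            n_a += 1
            conv_a += random.random() < control_rate
        else:
            n_b += 1
            conv_b += random.random() < variant_rate
        if user % 10_000 == 0 and n_a and n_b:
            p = two_proportion_p_value(conv_a, n_a, conv_b, n_b)
            print(f"{user:>7} users  p = {p:.4f}  significant: {p < alpha}")

simulate_ab_test(control_rate=0.022, variant_rate=0.023, sample_size=100_000)
```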
What to Try
Two conversion rates that are the same
Set both control and variant rates to the same value (e.g., 2.2% for both). Run multiple simulations and watch how often the test appears significant purely by random chance; at an alpha of 0.05, roughly 1 in 20 completed tests will, and declaring victory whenever the p-value dips below alpha mid-test makes false alarms even more frequent. This demonstrates Type I errors and why statistical significance doesn't guarantee a real difference.
Suggested settings: Control Rate = 0.022, Variant Rate = 0.022, Sample Size = 100,000
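A quick way to see Type I errors in aggregate, rather than one run at a time, is to simulate many tests in which both arms share the same true rate and count how often the final p-value lands below alpha. A hypothetical stdlib-only sketch (`p_value` and `binomial` are our own helpers, not part of the simulator):

```python
import random
from statistics import NormalDist

def p_value(conv_a, conv_b, n):
    """Two-sided pooled z-test p-value for two arms of equal size n."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = (pooled * (1 - pooled) * 2 / n) ** 0.5
    if se == 0:
        return 1.0
    z = (conv_b - conv_a) / n / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def binomial(n, p):
    return sum(random.random() < p for _ in range(n))

alpha, runs, n = 0.05, 200, 50_000  # 50,000 per arm = 100,000 users total
hits = sum(p_value(binomial(n, 0.022), binomial(n, 0.022), n) < alpha
           for _ in range(runs))
print(f"{hits}/{runs} runs looked significant (expected ≈ {alpha:.0%})")
```

Expect the count to hover near 5%. Note this checks only the final p-value; if you also call a test significant whenever it dips below alpha mid-run, the false alarm rate climbs well above the nominal 5%.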
Two conversion rates that are very close
Set rates that are slightly different (e.g., 2.2% vs 2.3%). See how long it takes to reach significance, if at all. This shows why detecting small improvements requires large sample sizes and longer test durations.
Suggested settings: Control Rate = 0.022, Variant Rate = 0.023, Sample Size = 100,000
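The "if at all" is no accident. A textbook normal-approximation sample size calculation (a sketch, not the simulator's own code) shows why 100,000 users is often not enough to detect a 2.2% vs 2.3% difference:

```python
from statistics import NormalDist

def required_n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Classic normal-approximation sample size for a two-proportion test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

print(round(required_n_per_arm(0.022, 0.023)))  # ≈ 345,000 per arm for 80% power
```

That works out to roughly 690,000 users total for 80% power, so a 100,000-user test will usually end without significance even though the variant really is better.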
Two conversion rates that are very far apart
Use rates with a large difference (e.g., 2% vs 5%). Observe how quickly significance is reached. This demonstrates how larger effect sizes lead to faster, more reliable results.
Suggested settings: Control Rate = 0.020, Variant Rate = 0.050, Sample Size = 50,000
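Plugging the 2% vs 5% scenario into the same normal approximation makes the contrast stark. One rough heuristic (the helper name is ours): find the per-arm sample size at which the average experiment's z-statistic first reaches the cutoff, which is equivalent to sizing the test for 50% power.

```python
from statistics import NormalDist

def n_where_average_z_hits(p1, p2, alpha=0.05):
    """Per-arm n where the *expected* z-statistic reaches the alpha cutoff
    (equivalent to the two-proportion sample size at 50% power)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    return z_a ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2

print(round(n_where_average_z_hits(0.020, 0.050)))  # ≈ 290 per arm
print(round(n_where_average_z_hits(0.022, 0.023)))  # ≈ 169,000 per arm
```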
Small number of users / samples
Try a small sample size (e.g., 1,000 or 5,000 users) with different conversion rates. Notice how even with real differences, small samples often fail to reach significance. This illustrates the importance of adequate sample sizes for reliable results.
Suggested settings: Control Rate = 0.022, Variant Rate = 0.030, Sample Size = 5,000
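You can quantify this with an approximate power calculation. The sketch below assumes an even split of the 5,000 users (2,500 per arm) and the usual normal approximation:

```python
from statistics import NormalDist

def approx_power(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-proportion z-test (normal approximation,
    ignoring the negligible opposite tail)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_h0 = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5
    se_h1 = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    return 1 - NormalDist().cdf((z_a * se_h0 - abs(p2 - p1)) / se_h1)

print(f"{approx_power(0.022, 0.030, 2_500):.0%}")  # ≈ 43% power
```

So even with a real lift of roughly 36% relative (2.2% → 3.0%), more than half of these 5,000-user tests are expected to end without a significant result.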
Run each test multiple times
Run the same configuration multiple times and observe how results vary due to random chance. The same test can show different p-values, different significance outcomes, and different final conversion rates. This highlights the importance of running tests long enough and understanding statistical variability.
Try: Run any configuration 5-10 times and compare the results
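To see that spread concretely, the sketch below (equal arms and a pooled z-test assumed, mirroring the small-sample scenario above) repeats one complete test ten times and prints the final p-values:

```python
import random
from statistics import NormalDist

def final_p_value(p_control, p_variant, n_per_arm):
    """Simulate one complete test and return its final two-sided p-value."""
    conv_a = sum(random.random() < p_control for _ in range(n_per_arm))
    conv_b = sum(random.random() < p_variant for _ in range(n_per_arm))
    pooled = (conv_a + conv_b) / (2 * n_per_arm)
    se = (pooled * (1 - pooled) * 2 / n_per_arm) ** 0.5
    z = (conv_b - conv_a) / n_per_arm / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_values = sorted(final_p_value(0.022, 0.030, 2_500) for _ in range(10))
print([f"{p:.3f}" for p in p_values])
```

Some runs will land comfortably below 0.05 and others far above it, even though the underlying rates never change.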
Need help implementing experiments?
Turn insights from this simulator into real results. The Delivering Growth Community (free to join) helps PMs, engineers, and founders build experimentation systems that drive conversion, activation, and retention. You'll learn to do this without bloated tooling or siloed teams.
- ✅ Guidance on A/B testing infrastructure and reliable experiments
- ✅ Code templates and patterns from top Growth teams
- ✅ Community of growth practitioners sharing wins and strategies