P-Value Calculator (Snapshot)
Recommended: For a more comprehensive tool that visualizes how p-values and confidence intervals evolve over time with daily data input, check out our P-Value and Confidence Interval Over Time Calculator.
What is this?
This tool performs a simple statistical test to compare conversion rates between two groups (typically a control and a variant) using only their final totals (i.e. a one-time snapshot, not daily data).
Why use it?
Sometimes you just want to quickly check if a result is statistically significant based on the totals collected so far, without needing a full time series or experiment history. This calculator lets you do that instantly.
It's useful for:
- Quick sanity checks
- End-of-test analysis
- Spot-checking raw data from platforms like Optimizely or GA4
How it works
You enter:
- Number of conversions and total visitors for the control group
- Number of conversions and total visitors for the variant group
The calculator then:
- Computes conversion rates for each group
- Calculates the absolute and relative lift
- Performs a two-proportion z-test to generate:
- A z-statistic
- A p-value indicating whether the difference is statistically significant
- Estimates post-test power based on the observed lift and sample size
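The steps above can be sketched in a few lines of Python using only the standard library. This is a minimal illustration of a two-proportion z-test, not the calculator's actual code; the function name and return shape are made up for the example.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> dict:
    """Compare two conversion rates from snapshot totals (control vs. variant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    abs_lift = p_b - p_a
    rel_lift = abs_lift / p_a  # lift relative to the control rate

    # Pooled rate under the null hypothesis that both rates are equal
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

    z = abs_lift / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

    return {"rate_a": p_a, "rate_b": p_b,
            "abs_lift": abs_lift, "rel_lift": rel_lift,
            "z": z, "p_value": p_value}
```

For example, 200/1,000 conversions in control vs. 250/1,000 in the variant yields a z-statistic around 2.7 and a p-value well below 0.05.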
Why Post-Test Power?
If your test didn't reach statistical significance, post-test power helps you answer:
"Was this result inconclusive because there's no effect, or because the test was underpowered?"
The power estimate tells you how likely your experiment was to detect the observed lift, assuming it's real.
Formula used for P-Value

$$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}\,(1 - \hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$$

The two-tailed p-value is then $p = 2\left(1 - \Phi(|z|)\right)$, where $\Phi$ is the standard normal CDF.

Where:
- $\hat{p}_1$ and $\hat{p}_2$ are the observed conversion rates
- $\hat{p}$ is the pooled rate (combined conversions divided by combined visitors)
- $n_1$ and $n_2$ are the total sample sizes
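As a worked example with hypothetical totals (200 conversions out of 1,000 visitors for control, 250 out of 1,000 for the variant):

```latex
\hat{p}_1 = 0.20, \qquad \hat{p}_2 = 0.25, \qquad
\hat{p} = \frac{200 + 250}{1000 + 1000} = 0.225
\\[6pt]
z = \frac{0.25 - 0.20}
         {\sqrt{0.225 \times 0.775 \times \left(\tfrac{1}{1000} + \tfrac{1}{1000}\right)}}
  \approx \frac{0.05}{0.0187} \approx 2.68
```

giving a two-tailed p-value of roughly 0.007, which is statistically significant at the 5% level.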
Formula used for Post-Test Power
We treat the observed absolute difference as the effect size, and calculate power using the same standard error as the p-value calculation:

$$z_{\text{effect}} = \frac{|\Delta|}{SE}$$

Where $SE$ is the standard error (same as used in the p-value calculation), and $\Delta = \hat{p}_2 - \hat{p}_1$ is the observed difference.
Then calculate power as:

$$\text{Power} = \Phi\left(z_{\text{effect}} - z_{\alpha/2}\right)$$

Where:
- $\Phi$ is the standard normal CDF
- $z_{\alpha/2}$ is the z-score for your significance level (e.g. 1.96 for 95% confidence, two-tailed)
- $z_{\text{effect}}$ is the z-score for the observed effect, calculated using the standard error that accounts for both sample sizes
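This power calculation can be sketched as follows. It is an illustrative standard-library implementation, assuming the common one-tailed approximation that ignores the negligible contribution of the opposite rejection region; the calculator's exact handling may differ.

```python
from math import sqrt
from statistics import NormalDist

def post_test_power(conv_a: int, n_a: int, conv_b: int, n_b: int,
                    alpha: float = 0.05) -> float:
    """Approximate power to detect the observed lift, assuming it is real."""
    nd = NormalDist()
    p_a, p_b = conv_a / n_a, conv_b / n_b

    # Same pooled standard error as in the z-test / p-value calculation
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

    z_effect = abs(p_b - p_a) / se       # z-score of the observed effect
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95%, two-tailed
    return nd.cdf(z_effect - z_alpha)    # Phi(z_effect - z_alpha/2)
```

A large observed lift relative to the standard error yields power near 1; a tiny lift yields power near the false-positive floor, signalling an underpowered test.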
Use this when you want a fast answer to:
- "Is this statistically significant?"
- "Was this test big enough to detect the effect I saw?"
Need help implementing experiments?
Turn insights from this calculator into real results. The Delivering Growth Community (free to join) helps PMs, engineers, and founders build experimentation systems that drive conversion, activation, and retention. You'll learn to do this without bloated tooling or siloed teams.
- ✅ Guidance on A/B testing infrastructure and reliable experiments
- ✅ Code templates and patterns from top Growth teams
- ✅ Community of growth practitioners sharing wins and strategies