Interaction Effects Calculator


Calculator Configuration

Input Data

Enter conversions and visitors for each combination of variants. Each row represents a unique combination of Control/Variant assignments across all tests.

Combination | Test 1 | Test 2 | Conversions | Visitors | Rate
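
For example, a two-test setup (2 × 2 = 4 combinations) would be entered as four rows like the sketch below; the counts are placeholder values for illustration, not real results:

```
Combination | Test 1  | Test 2  | Conversions | Visitors | Rate
1           | Control | Control |         120 |    2,000 | 6.00%
2           | Variant | Control |         150 |    2,000 | 7.50%
3           | Control | Variant |         135 |    2,000 | 6.75%
4           | Variant | Variant |         130 |    2,000 | 6.50%
```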

How it works

Watch a demo of the Interaction Effects Calculator

What are Interaction Effects?

When running multiple A/B tests concurrently, the variants from different tests can interact with each other. An interaction effect occurs when the impact of one test's variant depends on which variant the user sees in another test.

For example, a new homepage design (Test A) might work well with the baseline checkout (Test B Control), but perform poorly when combined with a new checkout flow (Test B Variant). Aggregating results without checking for interactions can hide these effects.

Chi-Square Test

The chi-square test provides a quick screening to detect whether conversion rates differ significantly across variant combinations. It answers: "Is there ANY interaction at all?"

This is a non-parametric test that doesn't require model assumptions, making it a good first pass before detailed analysis.
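
As a rough illustration of this screening step, the sketch below runs a chi-square test over a combination table in Python with scipy. It assumes the 2×2 design and placeholder counts from the example layout above; it is not the calculator's own code.

```python
# Minimal sketch of the chi-square screening step, assuming a 2x2 design
# (Test 1 and Test 2, each Control/Variant) and placeholder counts.
from scipy.stats import chi2_contingency

# One row per variant combination: [conversions, non-conversions]
observed = [
    [120, 1880],  # Test 1 Control, Test 2 Control
    [150, 1850],  # Test 1 Variant, Test 2 Control
    [135, 1865],  # Test 1 Control, Test 2 Variant
    [130, 1870],  # Test 1 Variant, Test 2 Variant
]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value means conversion rates are unlikely to be equal across
# combinations, so a closer look (e.g. logistic regression) is warranted.
```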

Logistic Regression

Logistic regression provides a detailed analysis of main effects (each individual test's impact) and pairwise interactions (how tests affect each other). Unlike the chi-square test, it tells you:

  • Direction: Whether interactions are positive or negative
  • Magnitude: The size of the effect (coefficients and odds ratios)
  • Significance: Which specific interactions are statistically significant
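
Here is a minimal sketch of an equivalent analysis using statsmodels, again with the placeholder counts from above. The variable names (test1, test2, converted) are illustrative assumptions; the calculator's own implementation may differ.

```python
# Logistic regression with main effects and a pairwise interaction term,
# using the same placeholder 2x2 counts as above (0 = Control, 1 = Variant).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

combos = [
    # (test1, test2, conversions, visitors)
    (0, 0, 120, 2000),
    (1, 0, 150, 2000),
    (0, 1, 135, 2000),
    (1, 1, 130, 2000),
]

# Expand the aggregated counts into one row per visitor.
rows = []
for test1, test2, conversions, visitors in combos:
    rows += [{"test1": test1, "test2": test2, "converted": 1}] * conversions
    rows += [{"test1": test1, "test2": test2, "converted": 0}] * (visitors - conversions)
df = pd.DataFrame(rows)

# test1 and test2 are the main effects; test1:test2 is the interaction.
model = smf.logit("converted ~ test1 + test2 + test1:test2", data=df).fit()
print(model.summary())       # coefficients, standard errors, p-values
print(np.exp(model.params))  # odds ratios for easier interpretation
```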

Interpreting Results

Main Effects: Show the impact of each test's variant when other tests are at baseline (Control). A positive coefficient means the variant increases conversion.

Interactions: Show how the effect of one test changes depending on another test's variant. A negative interaction coefficient indicates that combining variants reduces conversion.

Masked Effects: When a test shows a positive main effect but has negative interactions, the overall aggregated result can appear neutral even though the variant works well in some contexts and poorly in others.
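
To make the masking mechanism concrete, here is a back-of-envelope sketch with hypothetical coefficients on the log-odds scale (illustrative numbers, not output from the calculator):

```python
# Hypothetical log-odds coefficients chosen to illustrate a masked effect.
beta_test1 = 0.20       # Test 1 Variant effect when Test 2 is at Control
interaction = -0.40     # Test 1 x Test 2 interaction

effect_t2_control = beta_test1                # +0.20: the variant helps here
effect_t2_variant = beta_test1 + interaction  # -0.20: the variant hurts here

# If Test 2 splits traffic 50/50, a naive aggregated read of Test 1
# averages the two contexts and lands near zero, hiding both effects.
aggregate = 0.5 * effect_t2_control + 0.5 * effect_t2_variant
print(aggregate)  # ~0.0
```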

What to Do If You Find Interactions

  • Pause rollout: If negative interactions are detected, hold off on rolling out the variants until you understand why
  • Run longer: Extend the experiment to gather more data and confirm interaction effects
  • Investigate: Use qualitative analysis (session recordings, user feedback) to understand why interactions occur
  • Isolate tests: Consider redesigning experiments to reduce overlap if harmful interactions persist

Community

Need help implementing experiments?

Turn insights from this simulator or calculator into real results. The Delivering Growth Community (free to join) helps PMs, engineers, and founders build experimentation systems that drive conversion, activation, and retention. You'll learn to do this without bloated tooling or siloed teams.

  • ✅ Guidance on A/B testing infrastructure and reliable experiments
  • ✅ Code templates and patterns from top Growth teams
  • ✅ Community of growth practitioners sharing wins and strategies

Join for Free