It's Not Techy
CRO · Updated Apr 2026

A/B Test

A controlled experiment comparing two versions of a page, email, or feature.

Definition

An A/B test (also called a split test) is a controlled experiment comparing two versions of an asset — a page, an email, an ad, a feature — to measure which one produces a statistically significant improvement in a defined metric. Traffic is randomly split between the control and the variant.
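In practice, "randomly split" usually means deterministic bucketing: hash a stable user ID so the same visitor always lands in the same arm. A minimal sketch — the function name, ID format, and 50/50 split are illustrative, not a reference to any particular tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'variant'.

    Hashing (experiment, user_id) yields a stable, roughly uniform
    bucket in [0, 1), so a returning visitor always sees the same
    version within a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "control" if bucket < split else "variant"
```

Seeding the hash with the experiment name means the same user can fall into different arms across different experiments, which keeps tests independent of one another.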

Context

A/B tests require a pre-registered hypothesis, a single isolated variable, and a sample size large enough to detect the expected effect at the chosen confidence level and power. Running a test without these is creative rotation, not learning.

Most A/B tests never reach statistical significance because the sample size is too small. The honest approach is to label directional results as directional rather than claiming wins; most business decisions can be made on directional data, provided you are transparent about the uncertainty.

Example

Detecting a 10% relative lift on a 2% baseline conversion rate (2.0% → 2.2%) at 95% confidence and 80% power requires roughly 80,000 visitors per variant — about 160,000 visitors and 3,300 conversions in total. At a $50 AOV that is only around $165K in tested revenue, and most accounts simply don't have the traffic volume to get there in a reasonable window.
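The arithmetic above follows from the standard two-sided z-test power calculation for comparing two proportions. A sketch using only the Python standard library (the function name is ours; defaults assume a two-sided test at 95% confidence and 80% power):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base: float, rel_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect a relative lift in a
    conversion rate, via the normal approximation to a two-sided
    two-proportion z-test."""
    p1, p2 = p_base, p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ≈ 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)   # sum of per-arm variances
    return ceil((z_a + z_b) ** 2 * variance / (p2 - p1) ** 2)
```

`sample_size_per_variant(0.02, 0.10)` lands near 80,000 visitors per arm, which is where the example's totals come from; note how quickly the requirement falls as the detectable lift grows.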

The nuance most definitions miss

Peeking at results before the planned sample size is reached inflates the false-positive rate. The discipline is pre-committing to a sample size and running the full test; 'we'll stop when it's significant' produces wrong conclusions roughly 1 in 3 times.
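You can see the inflation directly by simulating A/A tests — two identical arms with no real difference — and stopping at the first peek that looks significant. A sketch with illustrative defaults (500 simulated tests, a peek every 250 visitors per arm):

```python
import random
from math import sqrt
from statistics import NormalDist

def peeked_false_positive_rate(n_sims: int = 500, n_total: int = 5000,
                               peek_every: int = 250, p: float = 0.05,
                               seed: int = 0) -> float:
    """Fraction of A/A tests (no true difference) falsely called a
    'winner' when a two-sample z-test is checked every `peek_every`
    visitors per arm and stopped at the first p < 0.05."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(0.975)  # ≈ 1.96, two-sided alpha = 0.05
    false_calls = 0
    for _ in range(n_sims):
        a = b = 0  # conversion counts in each arm
        for n in range(1, n_total + 1):
            a += rng.random() < p
            b += rng.random() < p
            if n % peek_every == 0:
                pooled = (a + b) / (2 * n)
                se = sqrt(2 * pooled * (1 - pooled) / n)
                if se > 0 and abs(a - b) / n / se > z_crit:
                    false_calls += 1  # declared a winner that isn't real
                    break
    return false_calls / n_sims
```

With 20 interim looks the simulated false-positive rate typically lands well above the nominal 5% — which is exactly why pre-committing to the sample size matters.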


Let's build something that ranks, converts, and lasts.

Book a free 30-minute strategy call — no pitch, just a real conversation about what moves the needle for your business.