
Product Exclusion - A/B Experiment Framework

An experiment framework to measure the impact of the Product Exclusion feature


Feature Focus: Product Stop Loss / Exclusion

Overview: Isolating Performance Lift

BigAtom’s Product Exclusion feature automatically identifies and removes underperforming products within a specific campaign context or at account level. This ensures that ad spend is directed only toward the SKUs most likely to convert for the selected audience — while still allowing those same products to perform in other campaigns where they are strong.

To validate the real incremental gains in ROAS and campaign efficiency, we use a controlled and measurable A/B Testing Methodology.


I. The A/B Test Methodology — Head-to-Head Comparison

The experiment compares performance between two identical campaigns — one running on an Optimized Product Feed (with Product Exclusion active) and another on a Standard Catalog Feed (with no exclusions applied).

1. Test Setup

| Element | Campaign A (Treatment) | Campaign B (Control / Holdout) |
| --- | --- | --- |
| Product Feed | Optimized Feed (Product Exclusion active) | Standard Feed (no optimization applied) |
| Optimization Logic | Poor products removed based on Group A performance only | Full catalog always included |
| Campaign Structure | A parallel set of similar campaigns | Identical parallel set |
| Goal | Measure incremental efficiency and ROAS lift | Establish baseline performance |


2. Campaign Setup Requirements

Control variables ensure a clean and valid comparison. The following must match between Group A and Group B:

  • Audience Targeting — Lookalikes, custom segments, or broad targeting must be consistent

  • Budget Allocation — Daily and total budget must be equal

  • Bidding Strategy — the strategy (e.g., Lowest Cost, Value Optimization) must match

  • Campaign Objective — All campaigns must use the same goal (e.g., Sales / Conversions)

  • Creative Assets — Ideally the same dynamic creative templates or similar formats

Any deviation in these variables may bias the results or invalidate the comparison.
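
Before launch, a quick programmatic parity check can catch setup drift between the two groups. The sketch below is a minimal Python example; the campaign fields and values are illustrative placeholders, not BigAtom or Meta API objects.

```python
# Minimal pre-launch parity check. Field names and values are illustrative only;
# they do not correspond to BigAtom's or Meta's actual API objects.

CONTROL_VARIABLES = ["audience", "daily_budget", "bidding_strategy", "objective", "creative_template"]

campaign_a = {  # Treatment: optimized feed with Product Exclusion
    "audience": "lookalike_1pct_purchasers",
    "daily_budget": 500.0,
    "bidding_strategy": "lowest_cost",
    "objective": "sales",
    "creative_template": "dynamic_carousel_v2",
}

campaign_b = {  # Control: standard catalog feed, no exclusions
    "audience": "lookalike_1pct_purchasers",
    "daily_budget": 500.0,
    "bidding_strategy": "lowest_cost",
    "objective": "sales",
    "creative_template": "dynamic_carousel_v2",
}

mismatches = [
    (key, campaign_a[key], campaign_b[key])
    for key in CONTROL_VARIABLES
    if campaign_a[key] != campaign_b[key]
]

if mismatches:
    for key, a_val, b_val in mismatches:
        print(f"MISMATCH in {key}: treatment={a_val!r} vs control={b_val!r}")
else:
    print("All control variables match; the setup is comparable.")
```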


3. Test Duration & Data Requirements

| Requirement | Guideline | Rationale |
| --- | --- | --- |
| Minimum Run Duration | 4 weeks (28 days) | Allows sufficient delivery cycles plus post-learning stability |
| Minimum Conversion Volume | 100+ conversions per group | Ensures statistically reliable ROAS & CVR measurement |
| Daily Budget | ≥ (Account Avg. CPA × 50) ÷ 7 | Supports a fast learning-phase exit and consistent delivery |
| Product Count | Only 20%–30% of the total catalog, and ≥ 1,000 products | Prevents spend scattering and ensures Meta has enough variety to optimize |
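
As a worked example of the Daily Budget guideline, the calculation below plugs a purely illustrative CPA figure into the formula ≥ (Account Avg. CPA × 50) ÷ 7.

```python
# Illustrative check of the daily-budget guideline: >= (account avg. CPA x 50) / 7.
# The CPA value is a made-up example, not a benchmark.

account_avg_cpa = 12.0          # account's average cost per acquisition, in account currency
weekly_conversion_target = 50   # conversions needed for a fast learning-phase exit

min_daily_budget = (account_avg_cpa * weekly_conversion_target) / 7
print(f"Minimum daily budget per group: {min_daily_budget:.2f}")  # 85.71
```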

“During Week 1 or until the campaign achieves 50 conversions (whichever is earlier), both Group A and Group B must run without exclusions. Once Meta exits learning or crosses the 50-conversion threshold, the Product Exclusion logic will be activated in Group A only.”
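
A minimal sketch of that activation gate might look like the following; the function and its inputs are hypothetical and are not part of BigAtom's or Meta's APIs.

```python
# Hypothetical activation gate for Product Exclusion, per the rule quoted above.

def should_activate_exclusion(group: str, exited_learning: bool, conversions: int) -> bool:
    """Product Exclusion turns on in Group A only, and only after the campaign
    exits Meta's learning phase or crosses 50 conversions, whichever comes first."""
    if group != "A":            # Group B (control) never gets exclusions
        return False
    return exited_learning or conversions >= 50

# Examples: Group A stays unexcluded at 37 conversions while still in learning,
# activates at 52 conversions; Group B never activates.
print(should_activate_exclusion("A", exited_learning=False, conversions=37))  # False
print(should_activate_exclusion("A", exited_learning=False, conversions=52))  # True
print(should_activate_exclusion("B", exited_learning=True, conversions=80))   # False
```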


II. Success Metrics — Measuring the “Goodness”

1. Primary Performance Indicator

| Metric | Expected Outcome | Reasoning |
| --- | --- | --- |
| Return on Ad Spend (ROAS) | ROAS_A > ROAS_B | Direct evidence of increased revenue efficiency due to product exclusion |


2. Secondary Efficiency Indicators

| Metric | Expected Outcome | What It Proves |
| --- | --- | --- |
| Cost per Purchase (CPP / CPA) | CPP_A < CPP_B | More efficient customer acquisition |
| Conversion Rate (CVR) | CVR_A > CVR_B | Stronger relevance and purchase intent |
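
Once the test completes, the three indicators can be scored side by side. The sketch below uses made-up totals purely to illustrate the calculation; substitute the actual reported spend, revenue, purchases, and clicks for each group.

```python
# Illustrative scoring of the experiment. All numbers are made up; plug in
# the actual reported totals per group after the 4-week run.

def metrics(spend, revenue, purchases, clicks):
    return {
        "ROAS": revenue / spend,        # revenue returned per unit of ad spend
        "CPP": spend / purchases,       # cost per purchase (CPA)
        "CVR": purchases / clicks,      # purchase conversion rate
    }

group_a = metrics(spend=14_000, revenue=61_600, purchases=560, clicks=11_200)  # treatment
group_b = metrics(spend=14_000, revenue=50_400, purchases=480, clicks=11_500)  # control

for name in ("ROAS", "CPP", "CVR"):
    a, b = group_a[name], group_b[name]
    lift = (a - b) / b * 100
    print(f"{name}: A={a:.3f}  B={b:.3f}  lift={lift:+.1f}%")
```

A positive lift is the success signal for ROAS and CVR, while a negative lift (a lower cost per purchase) is the success signal for CPP.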


Conclusion

This A/B experiment method ensures a scientifically fair evaluation of how Product Exclusion improves Meta campaign performance. It isolates the true incremental gain — proving that removing low-performing products results in:

  • Higher ROAS

  • Better budget efficiency

  • Increased likelihood of conversion

  • Reduced wasted ad spend
