Overview
Product Stop Loss is a smart optimization feature in BigAtom that automatically excludes non-performing products from your live catalog.
This frees up budget that Meta and Google can reallocate to:
- Products already performing well, or
- Products that have not yet received enough exposure
This process helps your ad delivery systems focus on high-potential products, improving catalog efficiency and reducing wasted ad spend over time.
Why Product Stop Loss Matters
Running ads for large product catalogs often leads to uneven distribution — a small subset of products gets most of the exposure while others get little to none.
Product Stop Loss helps balance this by:
- Eliminating irrelevant spend on consistently poor performers
- Freeing up budget for products with better potential
- Gradually improving the efficiency of catalog campaigns over time
For brands with extensive catalogs, this becomes a recurring optimization exercise — as product performance keeps shifting based on exposure, seasonality, and Meta or Google’s dynamic spend allocation logic.
Before You Implement Stop Loss
If you’re not fully confident about how Stop Loss works, or want to see its impact before applying it across your entire account, we recommend starting with a controlled experiment.
Running an experiment helps you:
- Understand how Stop Loss affects product visibility and ad efficiency
- Build confidence in the feature’s logic before making it part of your regular catalog optimization workflow
- Identify the right exclusion percentage and performance benchmarks for your brand
The following are two structured experiment frameworks you can use to assess the feature’s impact safely and accurately.
1. Account-Level Stop Loss Experiment
Objective
Measure the impact of excluding bottom-performing products across the entire account on overall efficiency metrics like ROAS, CPA, and cost per sale.
Setup Steps
Define the condition:
- Exclude the bottom 20% of poor-performing products, i.e. those that:
  - Have spent 1x–2x of your account-level cost per sale, and
  - Delivered significantly lower ROAS than your account average
- Use a 14-day or 30-day data window (avoid shorter periods like 7 days due to volatility).
- Base the 20% exclusion on spend, not product count. For example, if your total 30-day spend is ₹100,000, the excluded poor performers should together account for the bottom 20% (₹20,000) of that spend. If only 10 products (out of 500 total) make up that ₹20,000, exclude just those 10. Do not exclude 100 products simply because they are 20% of the product count. The sketch below illustrates this selection logic.
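To make the selection rule concrete, here is a minimal Python sketch of the spend-based exclusion, assuming product-level rows with id, spend, revenue, and sales fields. These field names, the benchmark parameters, and the below-half-of-average ROAS cutoff are illustrative assumptions, not BigAtom’s actual data model or thresholds.

```python
# Minimal sketch: pick poor performers worth ~20% of total spend.
# Field names and the 0.5x-ROAS cutoff are illustrative assumptions.

def select_stop_loss_products(products, benchmark_cps, benchmark_roas,
                              spend_share=0.20):
    """Return ids of poor performers covering ~spend_share of spend."""
    total_spend = sum(p["spend"] for p in products)
    budget = total_spend * spend_share  # 20% of spend, not 20% of SKUs

    # Candidates: spent 1x-2x the benchmark cost per sale, with ROAS
    # clearly below the benchmark average (here: under half of it).
    candidates = [
        p for p in products
        if benchmark_cps <= p["spend"] <= 2 * benchmark_cps
        and p["revenue"] / p["spend"] < 0.5 * benchmark_roas
    ]

    # Exclude the worst ROAS first, until the spend budget is used up.
    candidates.sort(key=lambda p: p["revenue"] / p["spend"])
    excluded, used = [], 0.0
    for p in candidates:
        if used + p["spend"] > budget:
            break
        excluded.append(p["id"])
        used += p["spend"]
    return excluded
```

Applied to the ₹100,000 example above, `budget` works out to ₹20,000, and the loop stops once the worst performers have filled that amount, however many products that turns out to be.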
Ensure account stability:
- Do not make major changes during the experiment:
  - No campaign creation or pausing
  - No budget changes
  - No major edits to catalog ads
- Regular (custom) ad campaigns can continue as usual.
Timing:
- Run the experiment under business-as-usual (BAU) conditions, ideally with BAU holding for 14 days before and after the experiment start date.
- This ensures clean data, free of other influencing factors.
Evaluating the Impact
Perform a pre–post analysis using:
- The 7 days before the experiment start
- The first 7 days after Stop Loss is activated
Compare key metrics like ROAS, Cost per Sale, CTR, and total GMV.
The analysis will highlight efficiency gains achieved by reallocating spend from low performers.
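As an illustration of how this comparison might be computed, here is a short, hedged sketch; the `daily` list and its field names are assumptions made for the example, not an actual BigAtom export format.

```python
# Pre-post comparison sketch. `daily` is an assumed list of per-day dicts
# with 'spend', 'revenue', 'sales', 'clicks', and 'impressions'.

def summarize(days):
    spend = sum(d["spend"] for d in days)
    return {
        "ROAS": sum(d["revenue"] for d in days) / spend,
        "Cost per Sale": spend / sum(d["sales"] for d in days),
        "CTR": sum(d["clicks"] for d in days)
               / sum(d["impressions"] for d in days),
        "GMV": sum(d["revenue"] for d in days),
    }

def pre_post_report(daily, start):
    """Compare the 7 days before index `start` with the 7 days after."""
    pre = summarize(daily[start - 7:start])
    post = summarize(daily[start:start + 7])
    for metric, before in pre.items():
        after = post[metric]
        print(f"{metric}: {before:,.2f} -> {after:,.2f} "
              f"({(after - before) / before:+.1%})")
```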
2. Campaign-Level Stop Loss Experiment
Objective
Assess the effect of Stop Loss on a single controlled campaign, so you can measure results without impacting the full account.
When to Use
Use this framework when:
- You cannot maintain account-level stability for 7+ days
- Multiple catalog campaigns are running with frequent changes
Setup Steps
Choose the right campaign:
- It must be an existing catalog campaign that has been running long enough to have stable performance data.
- Ideally your second- or third-highest spending campaign.
- It should have:
  - Only one active ad set
  - Only one catalog ad
  - A unique product set that is not shared with other campaigns
Apply the Stop Loss condition:
- Exclude the bottom 20% of poor performers within that campaign, using the same methodology as the account-level framework (see the sketch below).
- Use spend-based exclusion, not product count.
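If you followed the account-level sketch earlier, the same hypothetical function can be reused at campaign scope by passing only the test campaign’s products and benchmarking against the campaign’s own cost per sale and ROAS; all variable names below are illustrative.

```python
# Campaign-scoped reuse of select_stop_loss_products() from the earlier
# sketch. `products` and `test_campaign_product_ids` are assumed inputs.
campaign_products = [p for p in products
                     if p["id"] in test_campaign_product_ids]

c_spend = sum(p["spend"] for p in campaign_products)
c_sales = sum(p["sales"] for p in campaign_products)
c_revenue = sum(p["revenue"] for p in campaign_products)

excluded = select_stop_loss_products(
    campaign_products,
    benchmark_cps=c_spend / c_sales,     # campaign-level cost per sale
    benchmark_roas=c_revenue / c_spend,  # campaign-level ROAS
)
```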
Maintain campaign stability:
- Do not make any changes during the test period:
  - No budget changes
  - No pausing or duplication
  - No ad set or ad-level edits
- The campaign budget and setup should remain identical to the 7 days preceding the experiment start.
BAU Environment:
- Ensure at least 14 days of BAU before and after the experiment for accurate results.
Evaluating the Impact
Just like the account-level test, perform a pre–post analysis:
- Compare 7-day data before vs. after the Stop Loss implementation.
- Evaluate key outcomes such as:
  - Cost per Result
  - ROAS
  - Spend distribution across products (see the sketch below)
  - GMV uplift
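For the spend-distribution check in particular, a small sketch like the one below can show where the freed budget went; the input dicts mapping product id to spend are assumptions for illustration.

```python
# Compare each product's share of spend in the 7 days before vs. after
# activation. Products gaining share should be the surviving performers.

def spend_shares(spend_by_product):
    total = sum(spend_by_product.values())
    return {pid: s / total for pid, s in spend_by_product.items()}

def biggest_shifts(pre_spend, post_spend, top_n=10):
    """Products whose share of campaign spend grew the most."""
    pre, post = spend_shares(pre_spend), spend_shares(post_spend)
    shifts = {pid: post.get(pid, 0.0) - pre.get(pid, 0.0)
              for pid in set(pre) | set(post)}
    return sorted(shifts.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```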
Key Recommendations
✅ Keep Stop Loss experiments data-driven and consistent
✅ Ensure no overlapping changes in catalog campaigns
✅ Always analyze results in similar spend and condition windows
✅ Document results and iterate periodically — as product performance can change dynamically
Conclusion
Product Stop Loss is not a one-time optimization, but a continuous feed improvement mechanism.
By systematically excluding low performers, brands can:
- Reduce ad waste
- Improve overall catalog efficiency
- Enable Meta and Google to focus delivery on higher-impact products
Following the above frameworks will help you assess the true incremental value of Stop Loss — and make it a recurring part of your performance optimization workflow.
