Why This Strategy Works
Most brands run multiple campaigns targeting different audiences. But a product that performs well in one campaign might underperform in another. This creates inefficiencies and leads to wasted ad spend.
BigAtom helps you measure product-level performance within each campaign and automatically removes poor performers, allowing you to scale smarter and more profitably — without manual effort.
Common Results from This Strategy
✅ 10–20% improvement in campaign efficiency
✅ 20–50% increase in scale and spend with better ROAS
These outcomes are consistently observed across brands that have implemented this approach, though in some cases performance can vary significantly from account to account.
🔧 Setting Up This Strategy
This strategy works for both new and existing campaigns. The setup differs slightly in each case:
🟢 New Campaign Setup
Use this if you’re starting a fresh Advantage+ campaign to test this method.
Step 1: Create a Product Set
📦 What to do:
Create a product set of ~500 products from a broad category or catalog.
Ensure this product set is not used in any other campaign.
🚫 Avoid:
Product sets with <200 items — they limit the impact.
Overly filtered or niche sets.
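If you build product sets programmatically, a quick size check can enforce these guardrails before launch. A minimal sketch, assuming you have the product IDs as a plain list; the thresholds mirror the guidance above and the helper is hypothetical, not a BigAtom API:

```python
# Sanity check for a candidate product set, based on the guidance
# above (~500 products, never fewer than 200).
MIN_PRODUCTS = 200      # below this, impact is limited
TARGET_PRODUCTS = 500   # recommended size for a broad set

def validate_product_set(product_ids: list[str]) -> None:
    n = len(set(product_ids))  # de-duplicate before counting
    if n < MIN_PRODUCTS:
        raise ValueError(f"Product set too small ({n} items); need at least {MIN_PRODUCTS}.")
    if n < TARGET_PRODUCTS:
        print(f"Warning: {n} items; ~{TARGET_PRODUCTS} is recommended for best results.")
```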
Step 2: Campaign Setup
🎯 What to do:
Launch an Advantage+ campaign targeting a broad audience.
Keep this ad standalone (don’t add multiple ads using similar product sets to the same campaign).
Set the campaign budget to target at least 50 conversions within 7 days, which is typically required for the campaign to exit the learning phase.
For example, if your account-level Cost Per Sale (CPS) is ₹500, a daily budget of around ₹3,500 would help achieve ~50 sales over a week.
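If you want to sanity-check that math for your own account, the calculation is simply CPS × target conversions ÷ days. A minimal sketch in Python (the function name and defaults are illustrative):

```python
def daily_budget_for_learning(cps: float,
                              target_conversions: int = 50,
                              days: int = 7) -> float:
    """Daily budget needed to hit the learning-phase conversion target."""
    return cps * target_conversions / days

# With an account-level CPS of ₹500, as in the example above:
print(daily_budget_for_learning(500))  # ≈ 3571.43, i.e. roughly ₹3,500/day
```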
🚫 Avoid:
Using this product set in retargeting campaigns.
Mixing it with other dynamic product ads.
Step 3: Add Conditions to Product Set
🧠 What to do:
After running the campaign for at least 7 days with sufficient budget (as described above), or once the campaign has exited the learning phase, set up logic in BigAtom to auto-remove poor performers:
1. Go to Campaign Filter:
➤ Filter by the campaign name or ID where this set is being used.
2. Add Metric-Based Conditions:
Group 1: (And Condition)
Meta Spend > 3x CPS
(e.g., if CPS = ₹300 → Spend > ₹900)
Meta ROAS < 80% of campaign average
(e.g., if campaign ROAS = 3x → ROAS condition = 2.4x)
Group 2: (Or Condition)
Meta Spend < 3x CPS
(Exclude under-spent products from being filtered out too early)
3. Set Date Range = Last 30 Days
4. Check Result:
Ensure the products removed account for <15% of campaign spend.
If the share is too high, reduce the ROAS threshold slightly (e.g., from 2.4x to 2.2x).
This condition removes products that spent enough but didn’t deliver results, while keeping others active.
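For clarity, here is the same decision rule expressed as code. This is an illustrative sketch only, not BigAtom’s implementation; the ProductStats shape, field names, and helper functions are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ProductStats:
    product_id: str
    spend: float   # Meta Spend over the filter's date range
    roas: float    # Meta ROAS over the same range

def keep_product(p: ProductStats, cps: float, campaign_avg_roas: float,
                 spend_multiple: float = 3.0, roas_fraction: float = 0.8) -> bool:
    """Group 2 first: under-spent products always stay in, so nothing is
    filtered before it has had a fair test. Otherwise apply Group 1:
    remove products sitting below 80% of the campaign-average ROAS."""
    if p.spend < spend_multiple * cps:      # Group 2 (OR condition)
        return True
    return p.roas >= roas_fraction * campaign_avg_roas  # Group 1

def removed_spend_share(products: list[ProductStats], cps: float,
                        campaign_avg_roas: float) -> float:
    """Step-4 check: share of total spend sitting on removed products."""
    total = sum(p.spend for p in products)
    removed = sum(p.spend for p in products
                  if not keep_product(p, cps, campaign_avg_roas))
    return removed / total if total else 0.0

# With the example numbers above (CPS = ₹300, campaign ROAS = 3x):
p = ProductStats("sku-123", spend=1200.0, roas=1.8)
print(keep_product(p, cps=300.0, campaign_avg_roas=3.0))  # False: ₹1200 > ₹900 and 1.8 < 2.4
```

The removed_spend_share helper mirrors the step-4 check above: keep the figure under 15% before letting the filter run unattended.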
🔄 Existing Campaign Setup
Use this if you already have a running campaign and want to integrate this strategy.
Step 1: Migrate Existing Product Set
📦 What to do:
Migrate the existing product set from Meta to BigAtom.
Make sure the set has ~500 products or more.
Confirm that the product set is not used in multiple ads within the same campaign.
Step 2: Add Filters Based on Campaign History
📊 What to do:
Pull the campaign name/ID into BigAtom’s filter.
Use historical performance to set benchmarks.
Add the same logic as above using Meta Spend & ROAS filters.
🛠 You don’t need to wait for fresh data — BigAtom can use the last 30 days’ data instantly.
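Deriving those benchmarks from history is straightforward: CPS is total spend divided by purchases, and average ROAS is revenue divided by spend. A hedged sketch, assuming you have exported 30 days of per-product rows; the row format and field names are illustrative:

```python
def campaign_benchmarks(rows: list[dict]) -> tuple[float, float]:
    """Derive CPS and average ROAS from 30 days of per-product history.

    Each row is assumed to look like:
    {"spend": 950.0, "revenue": 2800.0, "purchases": 4}
    Assumes at least one purchase in the window.
    """
    total_spend = sum(r["spend"] for r in rows)
    total_revenue = sum(r["revenue"] for r in rows)
    total_purchases = sum(r["purchases"] for r in rows)
    cps = total_spend / total_purchases      # cost per sale
    avg_roas = total_revenue / total_spend   # campaign-average ROAS
    return cps, avg_roas
```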
What the Logic Does
This logic dynamically:
🚫 Removes poor performers that spent >3x CPS and still underdelivered
✅ Keeps good performers and those that haven’t spent enough yet
🎯 Maintains campaign scale by filtering only those dragging ROAS down
No manual work. No need to pause campaigns. Just smarter scaling.
This filter does not permanently exclude products from your campaign or product set.
If a product starts performing better again and meets the ROAS + Spend logic conditions, it will be automatically added back into the active campaign delivery.
This makes the strategy flexible and adaptive — it only pauses poor performers temporarily until they improve.
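To make the re-inclusion behavior concrete, here is a sketch of one evaluation cycle. It reuses the keep_product rule from the earlier sketch via a predicate; the sync step is a placeholder, not a documented BigAtom call:

```python
def sync_active_products(all_products, is_keeper):
    """One evaluation cycle: recompute the active set from scratch.

    is_keeper is the decision rule (e.g. keep_product from the sketch
    above, with CPS and campaign-average ROAS bound in). Because
    membership is recomputed on every run, a product removed in a past
    cycle re-enters automatically once its 30-day stats clear the bar.
    """
    active, paused = [], []
    for p in all_products:
        (active if is_keeper(p) else paused).append(p.product_id)
    # A real integration would push `active` back to the live product
    # set here; that call is a placeholder, not a documented API.
    return active, paused
```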
🧪 Summary: Why This Strategy Wins
| Aspect | Without BigAtom | With BigAtom Logic |
| --- | --- | --- |
| Product performance | Evaluated manually or across all campaigns | Evaluated within each campaign |
| Optimization effort | Manual pauses/adjustments | Auto-filtering of poor performers |
| Scaling | Risk of wasted spend | Spend only on winners |