
How to Run an Experiment with the Overlay Feature

Written by Praveen Kumar

Overview

Overlay is a creative enhancement feature in BigAtom that allows you to add customized communication layers on top of your plain catalog product images.

With Overlay, you can dynamically display key product details such as:

  • Price

  • Discount percentage

  • Ratings

  • Promotional text or brand messages

This makes your catalog ads more visually engaging and informative — helping your brand stand out while maintaining dynamic delivery through Meta’s and Google’s ad systems.


Before You Start

Before implementing Overlays on your catalog ads, please ensure the following setup conditions are met for accurate performance measurement and clean testing:

  1. Turn off Media Enhancements

    • Go to your Meta ad-level settings and turn off any automatic media enhancements across campaigns.

    • When media enhancements are enabled, Meta can automatically create and deliver alternate product image variations — which might include or exclude overlays inconsistently.

    • Disabling this ensures your Overlay experiments have clean, comparable results.

  2. Ensure Page Access

    • Applying overlays at the ad level requires Page Access for the connected ad account.

    • Make sure the publisher or user applying overlays has the necessary Page role or access permission to avoid setup failures or missing previews.


Experiment Framework for Overlay

If you’re new to Overlay or want to measure its exact impact on ad performance, it’s best to run a structured experiment first.


Below are two recommended frameworks for testing.


1. A/B Test (Recommended for Controlled Comparison)

Objective

To compare performance between catalog ads with and without overlays under identical conditions.

Setup Steps

  1. Create a new A/B test campaign in Meta Ads Manager following Meta’s standard A/B testing guidelines.

    • Do not modify an existing live campaign.

    • Create two separate ads — one with overlay applied and one without.

  2. Keep all other variables exactly the same, including:

    • Target audience

    • Budget

    • Placements

    • Optimization goals

  3. Run the test for at least 7 days, ideally 14, to gather statistically significant data.

  4. Compare performance on the following metrics (a significance-check sketch follows this list):

    • CTR (Click-Through Rate)

    • Cost per Result (CPR)

    • ROAS (Return on Ad Spend)
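
When the test window closes, export clicks and impressions for both ads and confirm that the CTR gap is larger than random noise before drawing conclusions. Below is a minimal Python sketch of a two-proportion z-test; the click and impression counts are illustrative placeholders, not values pulled from Ads Manager.

```python
# Two-proportion z-test: is the CTR difference between the overlay ad
# and the control ad larger than random noise?
from statistics import NormalDist

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Return (relative CTR lift, two-sided p-value) for B vs. A."""
    ctr_a, ctr_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = (pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b)) ** 0.5
    z = (ctr_b - ctr_a) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return (ctr_b - ctr_a) / ctr_a, p

# Example numbers only -- substitute your exported results.
lift, p = ctr_z_test(clicks_a=1_150, imps_a=100_000,   # control (no overlay)
                     clicks_b=1_320, imps_b=100_000)   # test (overlay applied)
print(f"CTR lift: {lift:+.1%}, p-value: {p:.4f}")
# A p-value below 0.05 suggests the lift is not just noise.
```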

Outcome

You’ll be able to quantify the incremental lift that Overlay provides in engagement and conversion efficiency, helping you decide whether to scale it across campaigns.


2. Pre–Post Analysis (For Quick Measurement)

Objective

To evaluate the performance impact of Overlay on an existing ad without creating a new campaign.

Eligibility Criteria

Before running a pre–post test, ensure the following conditions are met for accurate validation:

  • The brand should be in a BAU (Business-As-Usual) phase for at least 14 days prior to the experiment and remain stable for 14 days after overlay implementation.

  • The budget should remain consistent during the test period (the same as in the 7 days before the overlay went live).

  • The campaign should not have any major changes (in targeting, placements, bidding strategy, or creative mix) during the test period.

These conditions ensure that any observed performance shift is attributed to the Overlay feature and not external changes.
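
If you want a quick numeric check on the budget-consistency condition, one option is to look at how much daily spend fluctuated over the pre-period. The sketch below uses the coefficient of variation of daily spend; the spend series and the 15% threshold are illustrative assumptions for this sketch, not BigAtom or Meta requirements.

```python
# Sanity check that spend was stable (BAU) in the 14 days before the
# overlay went live. Pull the real numbers from an Ads Manager export.
from statistics import mean, pstdev

daily_spend = [510, 495, 502, 520, 488, 505, 498,
               515, 500, 492, 507, 511, 496, 503]  # last 14 days, example

cv = pstdev(daily_spend) / mean(daily_spend)  # coefficient of variation
print(f"Daily spend CV over the pre-period: {cv:.1%}")
if cv > 0.15:  # assumed tolerance; tune to your account
    print("Spend looks volatile; stabilize budgets before testing.")
else:
    print("Spend is stable enough for a pre-post comparison.")
```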

Setup Steps

  1. Choose a catalog ad that has been running steadily and meeting the above criteria.

  2. Apply the desired Overlay template using BigAtom.

  3. Allow the ad to run for at least 7 days post-implementation (ideally the full 14-day window) to accumulate sufficient data.

  4. Compare the following performance metrics between the 14 days before and the 14 days after overlay implementation (a lift-calculation sketch follows this list):

    • CTR (Click-Through Rate)

    • CPC (Cost per Click)

    • Conversion Rate

    • ROAS (Return on Ad Spend)
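
The comparison itself is simple arithmetic: relative lift = (post - pre) / pre for each metric. A minimal sketch, with placeholder values standing in for your own 14-day exports:

```python
# Pre-post lift per metric: (post - pre) / pre. Cost metrics such as
# CPC improve when they go DOWN, so direction is tracked per metric.
pre  = {"ctr": 0.0112, "cpc": 0.48, "conv_rate": 0.021, "roas": 3.1}
post = {"ctr": 0.0128, "cpc": 0.44, "conv_rate": 0.023, "roas": 3.4}

LOWER_IS_BETTER = {"cpc"}

for metric, before in pre.items():
    lift = (post[metric] - before) / before
    improved = (lift < 0) if metric in LOWER_IS_BETTER else (lift > 0)
    print(f"{metric:>9}: {lift:+.1%} ({'improved' if improved else 'declined'})")
```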

Outcome

A well-controlled pre–post analysis provides a directional yet reliable view of Overlay’s performance impact under real campaign conditions — helping you determine whether to scale overlays across campaigns.


Best Practices

✅ Always disable automatic media enhancements before overlay testing
✅ Keep budgets and audiences consistent during A/B tests
✅ Avoid making creative or campaign structure changes during the test window
✅ Run tests for a minimum of 7 days so Meta’s learning phase can complete before you evaluate results
✅ Document results and note any qualitative improvements in ad engagement or relevance
