Nataly
February 25, 2025
8 min read

Use Case: Understanding Overall Proceeds and Solving Paywall Cannibalization in A/B Testing

Learn how Overall Proceeds can transform your approach to paywall experiments and prevent the costly mistake of chasing short-term wins at the expense of long-term success.


Intro: The Hidden Danger in Paywall Experiments

Imagine you run an A/B test on your app's onboarding paywall. The new variation (Group B) converts better than the control (Group A), so you call the test a success. But weeks later, you notice something strange - your total app revenue hasn't increased. In fact, it may even have dropped.

This happens because of paywall cannibalization: optimizing one paywall shifts purchases away from another instead of generating real revenue growth. Without a metric that accounts for the overall revenue impact, you risk making misguided optimization decisions. This is where Apphud’s Overall Proceeds metric comes in.


The Risk of Misleading A/B Testing Metrics

A/B testing is a powerful tool, but without the right approach it can lead to false conclusions. Many developers rely on high-level KPIs to check that a test isn't harming other areas, but these don’t always reflect reality:

  • MRR or ARPU growth could be driven by audiences other than the one tested, making it irrelevant to the paywall optimization.
  • Per-paywall or per-placement metrics don’t tell the full story either - again because of audience differences - and they can't show whether a higher conversion rate on one paywall comes at the expense of revenue elsewhere.
  • External factors like seasonality, traffic sources, or economic changes can distort results, making it hard to isolate the impact of an A/B test.

To assess success, you need a holistic revenue metric that considers all purchases across all placements for the tested audience.


What is Overall Proceeds and Why Does It Matter?

Overall Proceeds is a non-cohort revenue metric in Apphud Experiments analytics that provides a more complete view of an A/B test's impact.

Experiments analytics, Apphud

This metric:

  • Focuses only on the tested audience - users assigned to a specific A/B test variation - excluding revenue from other audiences to avoid misleading uplift calculations.
  • Includes revenue from new purchases only, i.e. subscriptions and non-renewing transactions made after the user was added to the experiment.
  • Accounts for revenue across all placements and paywalls, ensuring that tested paywall changes don’t hurt other revenue streams from the same audience.

Overall Proceeds thus enables deeper analysis, as it:

  • Helps identify whether users prefer to buy through the tested paywall or elsewhere.
  • Reveals whether a revenue uplift in one placement compensates for losses in another.
  • Allows segmenting the total revenue driven by each test variation by Store Country, Product, and Placement.

With these insights, app managers can move beyond conversion-rate-focused optimization and ensure real revenue growth.


Use-Case Example: When a 'Winning' Paywall Was Actually a Loss

Let’s say an app runs an A/B test on its onboarding paywall for new users from country X.

Usually, the app generates approximately $50K per week in proceeds from newcomers. However, only 16% of users convert from the Onboarding paywall, which offers a monthly and an annual plan, and no one buys a year-long subscription right away. Newcomers from other countries convert better, so the team decides to improve conversion by introducing a weekly plan.

Each variation receives 50% of the audience. Once the experiment finishes, the results from the 10,000 users exposed to one of the tested paywall variations look promising:

  • Group B had a higher onboarding conversion rate (+6 percentage points), leading to 1,100 users making purchases.
  • Proceeds from the onboarding paywall were 15% higher for Group B than for Group A.
  • ARPU for Group B was also higher.

The team concludes that replacing the previous paywall with the weekly-subscription variant is a success.

However, purchases by country X newcomers from the later-stage Promo paywall placement (which offers a $79.99 promotional annual subscription to users who have already experienced the app's value) decreased significantly - by 30% (500 vs. 350 purchases), which translates into a drop of just 6 percentage points in conversion.

Total Proceeds and MRR at the app level did not immediately reveal the negative trend: a recent marketing campaign had brought back many lapsed users who re-subscribed, masking the real impact of the A/B test.

Without a metric tracking the full impact within the tested audience, the company might have assumed the experiment was a success - yet total weekly income from newcomers in country X dropped by almost $10K below the previous average, because users opted for the cheaper, short-term weekly plan rather than later committing to a higher-value annual subscription.
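The arithmetic behind the example can be reconstructed roughly as follows. The promo figures come from the example above; the absolute onboarding proceeds are an assumed value chosen purely for illustration, since the article only states a relative +15% uplift:

```python
PROMO_ANNUAL_PRICE = 79.99  # promo annual subscription price from the example

# Assumed figure: the example gives only the relative +15% onboarding
# uplift, so an absolute weekly baseline is invented here for illustration.
onboarding_proceeds_a = 13_000.0
onboarding_proceeds_b = onboarding_proceeds_a * 1.15  # +15% for Group B

promo_proceeds_a = 500 * PROMO_ANNUAL_PRICE  # 500 promo purchases in Group A
promo_proceeds_b = 350 * PROMO_ANNUAL_PRICE  # only 350 in Group B (-30%)

total_a = onboarding_proceeds_a + promo_proceeds_a  # ≈ $52,995
total_b = onboarding_proceeds_b + promo_proceeds_b  # ≈ $42,947

# Group B "wins" on the onboarding paywall yet loses ≈ $10K overall:
net_impact = total_b - total_a  # ≈ -$10,049
```

Under these assumptions, Group B's onboarding gain (≈ $2K) is dwarfed by the promo loss (150 fewer annual purchases ≈ $12K), which is exactly the pattern Overall Proceeds is designed to surface.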

Wasn't that obvious from the start?

It wasn’t. Different audiences might react differently to pricing changes, and predicting outcomes without testing is difficult.

When targeting a specific audience segment, you expect that those who convert will convert regardless of small changes, and adding an extra option should simply allow more people to convert without negatively affecting those who would have converted anyway. In theory, expanding choices should only increase revenue, not shift user behavior in unexpected ways.

However, real-world behavior isn't always rational or linear. Some users who might have been willing to commit to a higher-value subscription later instead lock into a low-cost, low-commitment option upfront. Others may anchor their expectations to the low weekly price they saw first and become unreceptive to higher-priced subscriptions, even when those offers are objectively better deals - ending up buying nothing at all.

But does this always happen? Not necessarily.

  • In some cases, adding a cheaper plan might genuinely attract new paying users without cannibalizing later purchases.
  • In others, it might reduce long-term revenue from high-value purchases, even while boosting short-term conversion rates.

This is why Overall Proceeds matters. Instead of relying on assumptions about user behavior or cross-checking a number of other analytics reports, developers need a metric that shows the total revenue impact across all placements and paywalls for the tested audience.

With Overall Proceeds, the company would see right away that despite higher ARPU, proceeds, and conversion in Group B, total revenue for the tested user group was lower due to cannibalization - allowing them to make informed decisions in the future.


Where Other A/B Testing Tools Fall Short

Many analytics platforms fail to account for total revenue shifts, leading to flawed insights. Here’s why:

  • Mixing new and existing subscribers skews revenue attribution.
  • Lack of direct segmentation by audience group requires extra manual work.
  • No built-in way to analyze revenue shifts across placements makes identifying cannibalization difficult.

Apphud’s Overall Proceeds closes these gaps by providing clear, experiment-focused revenue insights right in the Experiment Analytics view, where it can be checked alongside the other metrics to verify the true revenue uplift.

It enables paywall testing with a higher level of confidence and ensures that A/B test success translates into actual revenue gains.


Conclusion: The Devil is in the Details

A/B testing is only as effective as the metrics you use to evaluate success. Relying on conversion rates or ARPU alone can lead to false positives, where a high-performing paywall cannibalizes other revenue streams.

Overall Proceeds ensures that your optimizations lead to actual business growth - not just better-looking numbers.

Analyze Overall Proceeds in your next A/B test to maximize revenue growth.

Nataly
Head of Marketing at Apphud
7+ years in product marketing. Nataly is responsible for marketing strategy development and execution. Committed adherent of the agile methodology.
