Jane
June 30, 2023
12 min read

What is mobile app A/B Testing? How to run and analyze an A/B test

The mobile app market continues its rapid development, accompanied by growing competition among applications. In such an environment, it is necessary not only to attract traffic to your app but also to monetize it.


To achieve this, you need to know which set of products sells best, at what price, and what a subscriber is actually worth in each region of the world.

One of the tools that provides quick answers to these questions is mobile app A/B testing. Modern software development is built on systematic hypothesis testing; the best practices of large companies rely on building products around data from A/B tests.

Stages of mobile app A/B Testing

Hypothesis definition

The first step is to define a hypothesis and establish specific goals. For example, the hypothesis could be that changing the design of the onboarding paywall will increase the conversion rate by x%, or that offering a free trial will increase the conversion to paid users by y%. The goals should be clear and measurable, such as "increase the purchase conversion from onboarding by 5%".

How to formulate a hypothesis correctly?

Here's a brief framework for defining a hypothesis:

  • Because we saw (data/feedback)
  • We expect that (change) will cause (impact)
  • We'll measure this using (data metric)

Example hypothesis

  • Based on data from a user survey, we see that users miss having a trial version of the subscription.
  • We expect that adding a three-day trial period to the monthly subscription paywall will increase the purchase conversion.
  • We will measure the change using the trial-to-purchase conversion rate for this specific product.

Let's examine the stages of formulating a hypothesis in more detail.

Research

First and foremost, conduct research and gather both quantitative and qualitative data to identify the specific problem you will address in the experiment. For example, when segmenting your statistics by paywall, you may observe that Paywall 1 generates five times as many purchases as Paywall 2.

Regular subscription Chart, Apphud
Proceeds Chart, Apphud

Suppose that, having gathered all the data, you notice that users frequently request a trial version (or complain about high prices or a cumbersome onboarding process). As discussed earlier, a single experiment should address a single problem, so pick one and move on to the second step.

Expected changes and their estimated impact

The next step in forming the hypothesis is to document the expected changes and their presumed impact. For example, based on your research, you assume that providing a trial version of the product on the onboarding screen should increase the purchase conversion rate from 3% to 7%.

Metrics to determine goal achievement

The final step in forming the hypothesis is to determine the metrics by which we will measure the expected outcome. We recommend tracking at least these metrics: Views, Trials, Purchases, Proceeds, Refunds, ARPU, and ARPPU.
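
ARPU and ARPPU are easy to confuse, so here is a minimal sketch of how both are computed from a variation's aggregates. The struct and field names are illustrative, not part of any SDK.

```swift
import Foundation

/// Aggregated results for one test variation.
/// Illustrative names only; adapt to your own analytics export.
struct VariationMetrics {
    let views: Int        // paywall views
    let trials: Int       // trials started
    let purchases: Int    // paid conversions
    let refunds: Int      // refunded purchases
    let proceeds: Double  // total revenue after store fees
    let payingUsers: Int  // users with at least one purchase
    let totalUsers: Int   // all users exposed to the variation

    /// Average Revenue Per User: proceeds spread across everyone exposed.
    var arpu: Double {
        totalUsers > 0 ? proceeds / Double(totalUsers) : 0
    }

    /// Average Revenue Per Paying User: proceeds across payers only.
    var arppu: Double {
        payingUsers > 0 ? proceeds / Double(payingUsers) : 0
    }
}
```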


Creating an A/B Test

As you know, there are multiple factors that influence a user's decision to subscribe to a mobile application, including subscription price, availability of a free trial period, the appearance and information on the paywall, and other aspects. 

Within an experiment, you can modify various paywall parameters:

  • List of products
  • Product prices
  • Paywall appearance (background, buttons, font size, color, etc., using the JSON config; see the sketch below)

There is one golden rule: when conducting an experiment, focus on testing a single hypothesis. If, within one experiment, you change the subscription price, add a product with a trial period, and modify the paywall appearance, you won't be able to determine which specific change led to the observed result.
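
To make the last bullet concrete, here is a hedged sketch of decoding such a JSON config on the client. The keys are hypothetical; the actual shape of the config is whatever you define for your paywall.

```swift
import Foundation

// Hypothetical paywall appearance config; define your own keys.
struct PaywallConfig: Codable {
    let backgroundColor: String
    let buttonTitle: String
    let fontSize: Int
    let showTrialBadge: Bool
}

let json = """
{
  "backgroundColor": "#1B1B2F",
  "buttonTitle": "Start Free Trial",
  "fontSize": 17,
  "showTrialBadge": true
}
""".data(using: .utf8)!

do {
    // Decode the variation's config and style the paywall from it.
    let config = try JSONDecoder().decode(PaywallConfig.self, from: json)
    print(config.buttonTitle) // "Start Free Trial"
} catch {
    print("Config did not match the expected shape: \(error)")
}
```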

Components of an A/B test

Once you have formulated your hypothesis and determined the experiment's goal, you need to define the following parameters for your experiment:

  • List of products and their prices
  • Number of testing variations and distribution ratio
  • Target audience
  • Allocation size
  • Duration of the experiment

Products and prices

Decide on the prices and products that will be featured in each tested paywall. For example, you can compare the current onboarding paywall with a variation that includes a product trial. Or you can compare a monthly subscription with a trial period against a 3-month or annual subscription, and so on.

Variation example, ApphudVariation example, Apphud

Number of variations

You can run an A/B test with multiple paywall variations (multivariate testing), each featuring different products and prices. The number of variations depends on the complexity of the changes you want to test and the resources available for analysis.

The more variations there are, the more time is needed to conduct the experiment in order to have the necessary number of users in each variation.

Audience

Before conducting the test, decide which users will participate: new users, existing users, all users, or a specific segment (users from a particular country, or those who triggered a specific event such as trial cancellation or subscription cancellation).

Alternatively, you can target users who have previously shown interest in your product but did not become paying customers, as they may be more receptive to a revised offer. In this case, when selecting the audience, specify "Non-paying users" or create a custom audience of users who had the event Paywall payment canceled.

Allocation size

It is also important to determine the allocation size. The most common approach is to test 3-5 variations with an equal distribution of the sample. For example, 33% + 33% + 34% or 25% + 25% + 25% + 25%. Alternatively, you can allocate a larger percentage of users to the variation that is expected to be more successful in order to gather more data and obtain a more accurate estimation of the effect. You could try allocations like 70% + 30% or 80% + 20%.
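
Under the hood, allocation is typically a deterministic bucketing of users by ID, so each user always sees the same variation. The sketch below illustrates the general idea with a simple djb2 hash; it is not Apphud's internal algorithm.

```swift
import Foundation

/// Deterministically assigns a user to a variation index according to
/// the configured weights (e.g. [70, 30] or [25, 25, 25, 25]).
func assignVariation(userID: String, weights: [Int]) -> Int {
    let total = weights.reduce(0, +)
    // Stable bucket in 0..<total derived from the user ID
    // (Swift's Hashable is seeded per launch, so hash manually).
    var hash: UInt64 = 5381
    for byte in userID.utf8 {
        hash = hash &* 33 &+ UInt64(byte) // djb2
    }
    var bucket = Int(hash % UInt64(total))
    for (index, weight) in weights.enumerated() {
        if bucket < weight { return index }
        bucket -= weight
    }
    return weights.count - 1
}

// This user lands in variation 0 with 70% probability, variation 1 with 30%.
print(assignVariation(userID: "user-42", weights: [70, 30]))
```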

Keep in mind that the larger the sample you need, the more time it may take to obtain the results of the test.

A/B Test duration

You can increase the duration of the test to obtain more accurate results. However, it is important to consider that conditions may change over time, which can impact the test outcomes. For example, seasonal increases or decreases in purchases within your industry.

Let's consider an example of calculating the sample size. Suppose an average of 1,000 new users enter onboarding per day. If we run the test for one month with two variations, that is roughly 30,000 users in total, so each variation of the onboarding paywall will be seen by approximately 15,000 potential users. This sample size should be sufficient for making objective decisions.

An example calculation using the AB Tasty calculator.
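
For a rough estimate without an online calculator, you can apply the standard two-proportion sample-size formula directly. The sketch below assumes 95% confidence (z = 1.96) and 80% power (z = 0.84), the defaults most calculators use.

```swift
import Foundation

/// Approximate sample size per variation needed to detect a lift
/// from baseline conversion p1 to target conversion p2.
func sampleSizePerArm(p1: Double, p2: Double,
                      zAlpha: Double = 1.96, zBeta: Double = 0.84) -> Int {
    let pBar = (p1 + p2) / 2
    let a = zAlpha * sqrt(2 * pBar * (1 - pBar))
    let b = zBeta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return Int((pow(a + b, 2) / pow(p1 - p2, 2)).rounded(.up))
}

// Detecting a 3% -> 7% lift needs roughly 465 users per variation,
// so 15,000 users per arm is ample for an effect of that size.
print(sampleSizePerArm(p1: 0.03, p2: 0.07))
```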

It is recommended to run an A/B test for at least three weeks to account for differences in user behavior across days of the week and for seasonal factors.

Once you have set all the parameters, launch the test and wait for it to complete. In the early stages, one variation may appear to significantly outperform the others across all metrics, and beginners sometimes make the mistake of stopping the test before the planned duration. By the end of the experiment, however, it may turn out that there is no actual effect, or that the result is negative. Therefore, it is important to wait until the planned period is complete.

Remember, the larger the sample size, the more accurate your test results will be.

Analysis of A/B Tests Results

After the mobile A/B test completes, the data is analyzed to assess whether there is a significant difference between the variants and to determine the likelihood that it arose by chance. If one of the variants shows a statistically significant improvement over the others, its changes are rolled out to all users.
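
As a concrete illustration of "statistically significant", the sketch below runs a two-proportion z-test on purchase counts; |z| > 1.96 corresponds to significance at the 95% level. Analytics platforms typically run an equivalent test for you, so treat this as a sanity check rather than a replacement.

```swift
import Foundation

/// Two-proportion z-test: is the conversion difference between
/// variations A and B likely to be more than random noise?
func zScore(conversionsA: Int, usersA: Int,
            conversionsB: Int, usersB: Int) -> Double {
    let pA = Double(conversionsA) / Double(usersA)
    let pB = Double(conversionsB) / Double(usersB)
    // Pooled rate under the "no real difference" hypothesis.
    let pooled = Double(conversionsA + conversionsB) / Double(usersA + usersB)
    let se = sqrt(pooled * (1 - pooled)
        * (1 / Double(usersA) + 1 / Double(usersB)))
    return (pB - pA) / se
}

// 450 vs 540 purchases out of 15,000 users each gives z ≈ 2.9.
let z = zScore(conversionsA: 450, usersA: 15_000,
               conversionsB: 540, usersB: 15_000)
print(abs(z) > 1.96 ? "Significant at 95%" : "Not significant")
```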

When conducting an A/B test, it's important to remember that the conversion to trial or purchase is a short-term indicator of changes. While one variation may have a better conversion rate at the initial stage of testing, another variation may emerge as the winner in long-term metrics such as ARPU/ARPPU (Average Revenue Per User/Average Revenue Per Paying User).

Let's analyze an experiment using a specific example.

A/B test results example, Apphud

From the analysis of the metrics, we can see that Variation B has a higher conversion rate to purchase, but Variation A has a higher average revenue per user (ARPU) compared to Variation B.

Additionally, it's important to consider the cost per install (CPI) and compare it to the ARPU. If the ARPU is lower than the CPI, we recommend analyzing historical data by geography, generating new hypotheses, and creating new variations of the price, the list of products on the paywall, and the paywall's appearance for retesting. If the new variations do not increase ARPU, reevaluate your hypotheses and try again. Ultimately, you will find the price point and acquisition budget that maximize profitability.
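
The underlying check is plain unit economics: acquisition in a geography only pays off while ARPU exceeds CPI. A tiny sketch, with made-up numbers:

```swift
/// Per-geography unit economics; figures below are illustrative.
struct GeoEconomics {
    let country: String
    let arpu: Double // average revenue per user
    let cpi: Double  // cost per install

    var isProfitable: Bool { arpu > cpi }
    var marginPerInstall: Double { arpu - cpi }
}

let us = GeoEconomics(country: "US", arpu: 2.40, cpi: 1.80)
print(us.isProfitable, us.marginPerInstall) // true, ≈ 0.60 per install
```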


Conclusion

In conclusion, mobile A/B testing is a powerful tool for understanding user preferences and behavior, as well as optimizing software development.

This tool is an integral part of modern software development and can provide a significant competitive advantage in the market.

Through experiments, you can improve the following metrics:

  • Conversion rate to purchase
  • ARPU/ARPPU
  • Revenue

Additionally, experiments can provide insights into:

  • Optimal pricing for different countries
  • Popular products and ideas for their development

A/B tests enable developers to better understand their audience and offer products that best meet their needs. By using A/B testing, companies can make data-driven decisions that contribute to increased revenue and project success.

To get more insights on subscription app revenue growth, read the Apphud Blog.

Jane
Head of Business Development at Apphud
10+ years of experience in Project Management and Business Development. Jane began her professional journey as a Sales Manager and has since established herself as a Product Owner and BizDev Lead.
