A/B Testing

Availability

The A/B Testing feature is available in Countly Enterprise and as an add-on in Flex.

A/B Testing lets you test the product experience your users receive and improve it by rolling out features safely and strategically. With A/B Testing, you can run experiments using the Remote Config feature: you change parameter values and test how they alter the behavior and appearance of your app across each variant. Knowing which variation tests better, you can make better product decisions.

Feature Dependencies

A/B Testing works closely with the Remote Config, Cohorts, and Drill features, so please ensure you have these enabled.

Additionally, some functionalities require using the User Profiles feature.

Benefits of A/B Testing

With A/B Testing, you can experiment using Remote Config by making changes to parameters and grouping them into multiple variants to alter your app’s behavior and appearance in various ways across each variant group.

A/B Testing takes advantage of the same behavioral cohorts mechanism for targeting the experiment users and your goal definitions. Simply put, you’ll be able to measure the impact of your Remote Config value variants on cohorts you’d like to see more of your users in, helping you optimize your user journey using data you already collect.

Getting Started

First of all, make sure the A/B Testing feature is enabled. To do so, go to Management > Feature Management in the Sidebar and enable the A/B Testing toggle.

As noted above, ensure that the Remote Config, Cohorts, and Drill features are also enabled and correctly integrated into your app.

A/B Testing Overview

Experiments

An experiment is a procedure in which you evaluate multiple variants using Remote Config parameters you have created, or will create, for that experiment. Once the experiment is complete, you can check which variant performed best and, based on what you observe, roll out the winning variant with its parameter values.

To create an experiment, you need to have four things:

  1. A target audience (e.g., people using an iPhone 12). Note that if you do not set a target audience, the experiment will be active for all users by default.
  2. A parameter whose different values you will evaluate to see what works (e.g., the color of a CTA button).
  3. At least one variant that your users will see (e.g., a green, purple, or red button for each variant).
  4. A goal for the experiment.

At the end of the experiment, you'll be able to see the performance of each variant compared to the baseline, and you can even set the winning variant as the default one.
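To make these four pieces concrete, here is a minimal sketch of how an experiment could be modeled in code. The type names and fields are illustrative assumptions for this guide, not Countly's actual data schema:

```typescript
// Illustrative model of an A/B experiment (hypothetical types, not Countly's schema).
interface Goal {
  name: string;
  // What counts as a conversion, e.g. "at least five sessions per user".
  description: string;
}

interface Variant {
  name: string; // e.g. "Control Group", "Variant A"
  // Remote Config parameter values this variant applies, e.g. { cta_color: "green" }.
  parameters: Record<string, string | number | boolean>;
}

interface Experiment {
  name: string;
  description?: string;
  targeting: {
    percentage: number; // share of your app's total users, e.g. 50
    filter?: string;    // segmentation query, e.g. device is "iPhone 12"
  };
  goals: Goal[];        // the first entry is the primary goal (maximum of 3)
  variants: Variant[];  // at least one variant, plus the Control Group
}
```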

Using A/B Testing

Once you open A/B Testing, you will see the following:

-V6-A-B-testing-Google-Docs.png

  1. An overview panel with the status of your experiments, labeled Running, Drafts, and Completed.
  2. The + Create new experiment button.
  3. A table view containing all created experiments, which can be filtered by status using the dropdown menu.
  4. Each experiment listed by name; the name leads to the experiment's detailed view, where you can manage it individually, and a 3-dot ellipsis menu offers further edits.

Creating an Experiment

You can create an experiment by clicking the + Create Experiment button in the upper-right corner, which opens the Create new Experiment drawer. This process consists of four steps: Basics, Targeting, Goals, and Variants.

Basics

In this step, you define the experiment basics, such as its name and description.

-V6-A-B-testing-Google-Docs-2.png


Targeting

In this step, you define the target audience the experiment will run on. Targeting consists of a Percentage of target users, taken from your app's total users, and a Target Users filter, where you choose users based on their segmentation properties. The filter and percentage work as an AND condition. For example, in the image below, the experiment targets 50% of the app users who are using an iPhone 12.
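As an illustration of that AND condition, a hypothetical eligibility check could look like the sketch below. The hashing and filter logic are assumptions made for the example, not Countly's implementation:

```typescript
// Hypothetical check: a user enters the experiment only if they match the
// Target Users filter AND fall inside the configured percentage bucket.
function isEligible(
  user: { id: string; device: string },
  targetPercentage: number, // e.g. 50
  matchesFilter: (u: { device: string }) => boolean
): boolean {
  // Derive a stable bucket in [0, 100) from the user id, so the same user
  // always lands in the same bucket across sessions.
  let hash = 0;
  for (const ch of user.id) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return matchesFilter(user) && hash % 100 < targetPercentage;
}

// Example: target 50% of users who are on an iPhone 12.
isEligible({ id: "user-42", device: "iPhone 12" }, 50, u => u.device === "iPhone 12");
```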

-V6-A-B-testing-Google-Docs-3.png


Goals

This section is similar to creating cohorts: you simply set your goals for the experiment. You can base a goal on User Property Segmentation, User Behavior Segmentation, or both at the same time. The first goal is your primary goal, which decides the experiment's outcome; the rest are additional goals.

You can set a maximum of 3 goals per experiment.

For example, the experiment in the image below aims to find a variant that leads to at least five sessions per user.
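In code terms, that goal reduces to a simple predicate over the data collected for each user; the sketch below is illustrative, since Countly evaluates goals server-side through the cohorts mechanism:

```typescript
// Hypothetical conversion check mirroring "at least five sessions per user".
interface UserStats {
  sessionCount: number;
}

const meetsGoal = (stats: UserStats): boolean => stats.sessionCount >= 5;

meetsGoal({ sessionCount: 7 }); // true: this user counts as a conversion
```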

-V6-A-B-testing-Google-Docs-4.png


Variants 

You can create variants for your experiment in this section. For each variant, you can either choose an existing Remote Config parameter or create a new one.

Make sure the parameter is already used in your app's code; otherwise, it will have no effect during the experiment.
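In practice, that means your app code has to fetch the parameter and apply it. The sketch below uses the Countly Web SDK's remote-config helpers; the exact helper names can vary by SDK version, and the cta_color parameter and #cta selector are hypothetical:

```typescript
declare const Countly: any; // Countly Web SDK, loaded separately on the page

// Fetch the latest Remote Config values. While an experiment is running,
// its variant values override conditional and default values.
Countly.fetch_remote_config((err: Error | null) => {
  if (err) return;
  // Hypothetical parameter under test: the CTA button color.
  const ctaColor = Countly.get_remote_config("cta_color") ?? "green";
  const button = document.querySelector<HTMLButtonElement>("#cta");
  if (button) button.style.backgroundColor = ctaColor;
});
```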

Each variant will have the same parameters, with the values you choose to set for them. Each variant competes against a control group: the Control Group is simply a variant itself, against which all other variants test their performance. Any experiment must have at least one variant, plus the Control Group.

In this step, keep in mind the following:

  • You can have a maximum of 8 variants in an experiment.
  • In each variant, you can have a maximum of 8 parameters.
  • A parameter can only be involved in one running experiment at a time.
  • For any Remote Config parameter, experiment values take priority over any existing conditional values or the default value while the experiment is running, as sketched below.

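The priority rule above amounts to a simple resolution order, sketched here with hypothetical names:

```typescript
// Hypothetical resolution order for a Remote Config parameter value:
// experiment value (while running) > conditional value > default value.
function resolveValue(
  experimentValue: string | undefined,  // set while an experiment is running
  conditionalValue: string | undefined, // from Remote Config conditions
  defaultValue: string
): string {
  return experimentValue ?? conditionalValue ?? defaultValue;
}

resolveValue("purple", "blue", "green"); // "purple" while the experiment runs
```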

-V6-A-B-testing-Google-Docs-5.png

Managing Your Experiment

After you create an experiment, it is listed in the table view along with all other experiments. Click on the experiment name to access its detailed view.

Once you click on an Experiment, the following view will appear:

-V6-A-B-testing-Google-Docs-6.png


    1. The Stop Experiment button is used to stop the experiment.
    2. Experiment specifications, including its current status, start date, and whether any of the variants has been declared the winner.
    3. Experiment Details covers the basic information about the experiment.
    4. Variants shows the number of users included in each variant of the experiment.
    5. The View users button in the Variant grouping card allows you to check the user profiles present in the variant.
    6. Goals Overview shows you a brief description of the goals established while creating the experiment.
    7. Goals Statistics gives you an overview of the current state of each goal in the experiment, which can be chosen using the dropdown menu.



Starting the Experiment

Once all the fields in the steps above are set, you can click the Publish button to start the experiment.

-V6-A-B-testing-Google-Docs-7.png


Experiment Length

Once started, an experiment runs for 30 days. After that, it is rendered inconclusive if no leader has been found among the variants.

Stopping the Experiment

You can stop a running experiment from the 3-dot ellipsis menu on each experiment card. This action moves the experiment to the Completed stage, and it cannot be restarted. An experiment can be stopped at any point, regardless of whether a winner has been found. Once a winner is found or the experiment is rendered inconclusive, it stops processing, and you can end it.

-V6-A-B-testing-Google-Docs-10.png


Monitoring the Experiment

Once an experiment has been running for a while, you can check its progress and see what the results look like for the users who have participated so far. Just click on your experiment in the Running section. On this page, you can view various statistics about your running experiment, including general experiment information. Underneath, you will find the following information for each goal, with the underlying arithmetic sketched after the list:

  1. Variants: The variants involved in the experiment.
  2. Improvement: A measure of the improvement of the variant over the baseline for the selected goal.
  3. Conversion rate: The conversion rate of the users falling into the given variant of the experiment.
  4. Probability beat baseline: The probability that a given variant will beat the baseline for the selected goal.
  5. Conversion number: Total user conversions for the variant.
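As a rough illustration of the arithmetic behind these columns, assuming Improvement is the relative lift of a variant's conversion rate over the baseline's (Countly's exact formulas may differ):

```typescript
// Assumed metric arithmetic for illustration, not Countly's exact formulas.
interface VariantStats {
  users: number;       // users assigned to the variant
  conversions: number; // users who met the selected goal
}

const conversionRate = (v: VariantStats): number => v.conversions / v.users;

// Relative improvement of a variant over the baseline (Control Group).
const improvement = (variant: VariantStats, baseline: VariantStats): number =>
  (conversionRate(variant) - conversionRate(baseline)) / conversionRate(baseline);

const control = { users: 1000, conversions: 100 };  // 10% conversion rate
const variantA = { users: 1000, conversions: 120 }; // 12% conversion rate
improvement(variantA, control); // 0.2, i.e. a 20% improvement over the baseline
```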

The Winning Variant

To decide the winning variant of an experiment, we check whether the lower limit of a variant's conversion rate is at least 1% greater than the upper limit of the Control Group's conversion rate. If so, we declare a winner and stop the experiment. This also ensures that, even in the worst case, the winning variant improves the conversion rate by at least 1% over the Control Group.
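Reading the 1% in that rule as an absolute margin on the conversion rate, the decision boils down to the comparison sketched below; the interval bounds themselves come from Countly's statistics engine and are treated as inputs here:

```typescript
// Winner check per the rule above: a variant wins when the lower bound of its
// conversion-rate interval exceeds the upper bound of the Control Group's
// interval by at least 1 percentage point (an assumed reading of "1%").
interface RateInterval {
  lower: number; // e.g. 0.125 for 12.5%
  upper: number;
}

const beatsControl = (variant: RateInterval, control: RateInterval): boolean =>
  variant.lower >= control.upper + 0.01;

beatsControl({ lower: 0.125, upper: 0.14 }, { lower: 0.10, upper: 0.11 }); // true
```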

Rolling Out a Variant

After you have a winning variant for your primary goal, you can roll out this variant from the experiment to 100% of the users. You can select whichever variant you like and publish it in Remote Config for all users moving forward. Even if your experiment does not have a conclusive winner, you can still choose to roll out a variant to all of your users.

By clicking the Publish Experiment button, you will see a drawer open up where you can choose your variant and roll it out.

-V6-A-B-testing-Google-Docs-9.png

Once the variant has been rolled out, you can check it in the Remote Config feature.
