What are A/B Testing Statistics?
A/B testing statistics refers to the statistical model used to conduct an A/B test (controlled experiment), which in the App stores means comparing the performance of two variations of an App store page. An A/B test is used to prove or disprove a hypothesis by testing only a sample of the entire population in the live store, then using the observations collected to predict, with a reasonable level of accuracy, how the entire population in the live App stores will behave.
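To make that concrete, here is a minimal Python sketch of the comparison such a test boils down to: two store-page variations, each shown to a sample of visitors, with their install conversion rates compared. The visitor and install counts are hypothetical, and the two-proportion z-test shown is just one common way to frame the comparison, not any particular platform's method.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical sample data: visitors and installs for two store-page variations.
visitors_a, installs_a = 5000, 600   # variation A: 12.0% conversion
visitors_b, installs_b = 5000, 660   # variation B: 13.2% conversion

p_a = installs_a / visitors_a
p_b = installs_b / visitors_b

# Pooled conversion rate under the null hypothesis (no real difference).
p_pool = (installs_a + installs_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

# Two-sided z-test for the difference in conversion rates.
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p-value = {p_value:.3f}")
```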
Every statistical model has a number of prerequisites (test parameters) that must be met in order to run a reliable test and show which variation performs better.
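One such parameter is the sample size needed to detect a given lift reliably. The sketch below uses a standard approximation for a two-proportion test; the helper name sample_size_per_variation and the baseline and lift values are illustrative assumptions, and the exact prerequisites depend on the statistical model chosen.

```python
from statistics import NormalDist

def sample_size_per_variation(p_baseline, min_relative_lift,
                              alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-proportion test."""
    p1 = p_baseline
    p2 = p_baseline * (1 + min_relative_lift)  # relative lift (assumption)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# e.g. a 12% baseline conversion rate, aiming to detect a 10% relative lift
print(sample_size_per_variation(0.12, 0.10))
```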
There are three A/B testing statistical methods, each used in a different way. The first is the ‘frequentist’ approach, which ignores any previous findings or knowledge from similar tests and uses only the data from the current experiment. ‘Sequential’ testing does not fix the sample size in advance; the data is evaluated as it is collected, and the test can be stopped once enough data has been gathered. The third is the ‘Bayesian’ method, used by third-party A/B testing platforms like Storemaven. It rests on the statistical principle that you can calculate the probability of a variation’s conversion rate using a model that evolves continuously as incoming data arrives throughout the test. The result is a more accurate model that lets you reach a conclusion with a high degree of certainty.
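As an illustration of the Bayesian idea of a model that updates as data arrives, the sketch below uses a generic Beta-Binomial formulation with hypothetical install counts to estimate the probability that one variation’s conversion rate beats the other’s. This is a textbook example, not Storemaven’s actual implementation.

```python
import random

# Hypothetical results observed so far in the test.
installs_a, visitors_a = 600, 5000
installs_b, visitors_b = 660, 5000

# Beta(1, 1) prior, updated with the observed installs and non-installs.
def posterior_sample(installs, visitors):
    return random.betavariate(1 + installs, 1 + visitors - installs)

# Monte Carlo estimate of P(conversion rate of B > conversion rate of A).
draws = 100_000
b_wins = sum(
    posterior_sample(installs_b, visitors_b) > posterior_sample(installs_a, visitors_a)
    for _ in range(draws)
)
print(f"P(B beats A) ≈ {b_wins / draws:.3f}")
```

As more data comes in, the posteriors are simply updated with the new counts, which is why the probability estimate can be monitored throughout the test rather than only at a fixed end point.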
Why A/B Testing Statistics are Important
Statistics is vital to the process of planning, running and evaluating A/B tests.
Simply put: failing to use the right statistical model when A/B testing is a waste of time and money. The effective implementation of A/B testing statistics should translate into an increase in installs beyond the tested audience once the better-performing page is applied to the entire population in the live App stores.