Determining Conversion Probability

How conversion probabilities are built.


A post from Markus Schaal, Webtrekk Head of Data Science

This is the first installment of the Dev Blog series: posts written in the language of programmers.

 

The conversion probability of a visitor to your website is the likelihood of this visitor becoming a buyer – i.e., the likelihood that he or she will convert. This probability is the key to personalised targeting.

Let’s imagine a scenario: An old shopkeeper in a little village will look at you, his potential customer, and try to work out whether you will buy something and whether it is therefore worth his time and energy to invest in you. The shopkeeper must size you up based on your age, your clothing, and what your mood is (or appears to be).

Luckily for those of us in the online world, we don’t need to make assumptions based on, say, how tired the person is walking down the street. We can instead utilise conversion probabilities based on user profiles and mountains of historical data.

Let's look at two different ways to determine the conversion probability of a website visitor:

1. The RFM Model is a huge leap towards a personalised evaluation of conversion probability. In the Webtrekk Digital Intelligence Suite, a marketing expert can choose the boundaries of low, middle and high values of recency (R), frequency (F) and monetary value (M) of a visitor. She can then analyse to what extent the various groups converted in the past, and use this historical data as the foundation for future campaigns.

2. Machine Learning is an anonymous way to use various parameters for the estimation of individual conversion probability. In the new URM Explorer of the Suite, you can calculate the likelihood of conversion for each website visitor with state-of-the-art machine learning methods. After determining the most relevant factors for conversion probability – e.g., buying rate or engagement during the most recent visits – a model is built via supervised learning that connects the vector of input values to the resulting conversion probability. By applying this model, conversion probability can be computed for each URM visitor using his or her current properties as input values. Typical algorithms for supervised learning include logistic regression and random forests; a rough sketch of this approach follows below.
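To make the supervised-learning step more concrete, here is a minimal sketch in Python with scikit-learn. It illustrates the general approach only, not the URM Explorer's internals; the file name and the column names (recency_days, visit_frequency, monetary_value, converted) are assumptions made for the example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical visitor profiles: one row per visitor, with the label
# "converted" taken from historical data (1 = bought, 0 = did not buy).
visitors = pd.read_csv("visitor_profiles.csv")
features = ["recency_days", "visit_frequency", "monetary_value"]

X_train, X_test, y_train, y_test = train_test_split(
    visitors[features], visitors["converted"], test_size=0.2, random_state=42
)

# Supervised learning: fit a model that maps the feature vector to the label...
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# ...and read off an individual conversion probability per visitor.
# predict_proba returns [P(no conversion), P(conversion)] for each row.
conversion_probability = model.predict_proba(X_test)[:, 1]
```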


But do we need that? How much better is logistic regression than the RFM Model? The ultimate answer to this question will come from our customers as they compare RFM-powered marketing actions with machine learning-powered marketing actions. (This comparison can be carried out with Webtrekk’s new landing page optimiser.)

Meanwhile, we can use the F1 score to compare the quality of the two methods. Here is Wikipedia’s definition of the F1 score:

In statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results, and r is the number of correct positive results divided by the number of positive results that should have been returned. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.

In our context,

  • precision would be the rate of correct conversion predictions (conversion probability above 50% and truly converted) among all our conversion predictions (conversion probability above 50%, whether truly converted or not)

  • recall would be the rate of correct conversion predictions (conversion probability above 50% and truly converted) among all true conversions (truly converted, regardless of the predicted probability)
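As a toy illustration (with made-up numbers, not the dataset discussed below), precision, recall and F1 at a 50% threshold can be computed like this:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # 1 = truly converted
y_prob = [0.9, 0.6, 0.4, 0.8, 0.2, 0.1, 0.7, 0.3]  # predicted conversion probabilities

# Predict "conversion" whenever the probability exceeds 50%
y_pred = [1 if p > 0.5 else 0 for p in y_prob]

precision = precision_score(y_true, y_pred)  # correct predictions / all conversion predictions
recall = recall_score(y_true, y_pred)        # correct predictions / all true conversions
f1 = f1_score(y_true, y_pred)                # harmonic mean: 2 * p * r / (p + r)
print(precision, recall, f1)                 # 0.75 0.75 0.75 for this toy example
```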

 
Let's start with a simple RFM Model:

  • Recency Groups: 3 (0-10 days), 2 (11-50 days) and 1 (beyond 50 days)

  • Frequency Groups: 3 (more than 50 visits), 2 (11-50 visits) and 1 (0-10 visits)

  • Monetary Value Groups: 3 (more than €100), 2 (€51-100) and 1 (€0-50)


For each RFM group, we predict the conversion probability based on historical data from the last 12 months.
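As a rough sketch (again with assumed column names, reusing the hypothetical visitor data from the earlier example), the grouping and the per-group prediction could look like this:

```python
import pandas as pd

visitors = pd.read_csv("visitor_profiles.csv")  # hypothetical 12-month extract

# Bucket each visitor into the R, F and M groups defined above
# (label 3 = best group, 1 = worst group).
visitors["R"] = pd.cut(visitors["recency_days"], [-1, 10, 50, float("inf")], labels=[3, 2, 1])
visitors["F"] = pd.cut(visitors["visit_frequency"], [-1, 10, 50, float("inf")], labels=[1, 2, 3])
visitors["M"] = pd.cut(visitors["monetary_value"], [-1, 50, 100, float("inf")], labels=[1, 2, 3])

# The historical conversion rate of a visitor's RFM group serves as
# his or her predicted conversion probability.
visitors["p_conversion"] = (
    visitors.groupby(["R", "F", "M"], observed=True)["converted"].transform("mean")
)
```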

From a real-life sample of about 75,000 visitors, this simple model achieved an F1 score of 0.35.

Now let’s manually optimise the RFM groups to obtain better results:

  • Recency Groups: 3 (0-20 days), 2 (21-100 days), and 1 (beyond 100 days)
  • Frequency Groups: 3 (beyond 100 visits), 2 (21-100 visits), and 1 (0-20 visits)
  • Monetary Value Groups: 3 (beyond €100), 2 (€51-100), and 1 (€0-50)


With these boundaries, the F1 score improved to 0.40.

Since the F1 score performs best when type I errors (falsely predicted conversions) and type II errors (missed conversions) are in balance, it is better to predict roughly as many conversions as actually occur. To that end, we lower the required conversion rate per RFM group from 50% to 30%: a visitor is now predicted to convert whenever his or her RFM group's historical conversion rate exceeds 30%.
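In the sketch notation from above (with the assumed p_conversion column), this is just a change of the decision threshold:

```python
from sklearn.metrics import f1_score

# Predict a conversion whenever the RFM group's historical conversion rate
# exceeds the threshold, then compare F1 at a 50% vs. a 30% cutoff.
f1_at_50 = f1_score(visitors["converted"], (visitors["p_conversion"] > 0.5).astype(int))
f1_at_30 = f1_score(visitors["converted"], (visitors["p_conversion"] > 0.3).astype(int))
```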

Lowering the threshold improves the F1 score to 0.55.

Well, that is not bad compared to the chances of predicting conversion correctly without RFM grouping.

However, the exact same dataset reached an F1 score of 0.94384 with our new and fully automated machine learning module (URM Predictions, a module of the Webtrekk Suite) while using (only) the following parameters, sketched in code after the list:

  • Recency
  • Frequency
  • Monetary Value
  • Orders per Visit
  • Value per Buy
  • Order Frequency
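A hedged sketch of such a feature set, continuing the hypothetical visitors DataFrame from the earlier examples (the column names are illustrative assumptions, and URM Predictions may use different features and algorithms internally):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

features = [
    "recency_days", "visit_frequency", "monetary_value",
    "orders_per_visit", "value_per_buy", "order_frequency",
]

# For brevity this fits and scores on the same data; in practice you would
# evaluate on a held-out test set as in the first sketch.
model = LogisticRegression(max_iter=1000).fit(visitors[features], visitors["converted"])
visitors["p_conversion_ml"] = model.predict_proba(visitors[features])[:, 1]
f1_ml = f1_score(visitors["converted"], (visitors["p_conversion_ml"] > 0.5).astype(int))
```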


To summarise, machine learning delivers a vast improvement in the prediction quality of conversion probabilities in the underlying use case and is therefore likely to power automated marketing campaigns that are much more successful than those based on RFM models alone. And machine learning campaigns will obviously perform better than campaigns – which are all too common – that do not even have an RFM model in place.

Of course the world is not always perfect. Sometimes the F1 score is much lower when using only six parameters. But then we can dig much deeper with thousands of potential parameters for each customer hidden in our huge data reservoirs. That’s a topic for another day…

 