Many companies overestimate themselves when they claim to be data-driven. In reality, they often don’t even have the basics in order, and as a result all the effort they put in delivers very little return. A/B tests are the absolute foundation of a data-driven organisation. Read on for my five must-have basics to get in place before you even consider things like AI.
Data is the starting point for just about every decision made in the digital world. So it’s not surprising that many organisations swear by the motto ‘test, test, test’, or more specifically: A/B test, test, test. Take Booking.com, which takes pride in the thousands of A/B tests it has running simultaneously. I often come across businesses that aspire to operate on such a highly data-informed level. That in itself doesn’t have to be a problem, as long as the company is “data mature” enough and a solid basis for collecting data is in place.
It strikes me that a lot of organisations overestimate themselves in this area. They claim to be data-driven but, in reality, are better described as data-aware or, at most, data-savvy. As far as I’m concerned, the following five points on how to run a valuable A/B test are the absolute basics of a data-driven organisation.
If one of the following points is not part of your organisation’s foundation, it would be wise to take a step back, recognise that you are not a FAANG internet giant, and fix these basics first:
1. Statistical knowledge

Before you even start thinking about A/B testing, some statistical knowledge is required. For example, it’s imperative that you know the difference between correlation and causation. In addition, it’s essential to know that you can’t simply generalise results. Say you’re looking to find out whether to use a red or a green button, and the A/B test shows that red converts better. This does not mean that from now on you should make all your buttons red. At the moment of testing, red simply came out better for that particular sample. In the context of the rest of the website content, a colour you never tested, such as yellow, might have performed better still.
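To make the statistics behind a red-versus-green button test concrete, here is a minimal sketch of a two-proportion z-test using only the Python standard library. All visitor and conversion numbers are hypothetical, purely for illustration.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: green converts 90/2000 visitors, red 120/2000
z, p = two_proportion_z_test(90, 2000, 120, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that a significant result here only says something about this sample at this moment; it is not a licence to generalise, as the button example above shows.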
2. Define a process

If you are serious about working on your conversion, you need to define a standard process for it. Otherwise, you introduce yet more factors that could distort your outcome, and it becomes impossible to compare test results. What the process should look like varies per organisation, but you could begin with an analysis of your users’ customer journey. This way you discover exactly where your website visitors come from and the steps they take along the way. This can be followed up by a conversion review to determine which web pages, content or other resources are your biggest conversion killers. Then the focus can shift to the visitors themselves, by studying their behaviour and interaction through heat maps, click maps, move maps and scroll maps.
3. Use the right KPIs

A/B testing often looks only at the top KPIs, or macro conversions, such as information requests or orders. While these are important parts of the conversion funnel, the micro conversions from earlier steps in the funnel are just as important.
By optimising them, you ensure that more visitors end up at the bottom of the funnel. Optimising micro conversions is, naturally, more complex, because the focus is on subtler measurements than just clicks on the order button. Think of metrics such as bounce, click-through and exit rates, which can be influenced by a multitude of external, content-related or technical factors.
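As an illustration of why micro conversions matter, the sketch below uses entirely hypothetical funnel numbers (the step names are illustrative, not from the article) to compute the step-to-step drop-off alongside the macro conversion rate:

```python
# Hypothetical visitor counts per funnel step
funnel = [
    ("landing page", 10_000),
    ("product page",  4_000),
    ("add to basket",   800),
    ("checkout",        300),
    ("order placed",    150),
]

# Micro conversions: step-to-step rates show where visitors drop out
for (step, n), (_, n_next) in zip(funnel, funnel[1:]):
    print(f"{step:>14} -> next step: {n_next / n:.1%}")

# Macro conversion: only the final step versus the top of the funnel
macro_rate = funnel[-1][1] / funnel[0][1]
print(f"macro conversion: {macro_rate:.1%}")
```

The macro rate alone (1.5% in this made-up example) hides that the biggest leak sits between the product page and the basket, which is exactly where optimisation effort would pay off first.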
4. Maintain a fixed test duration

Before going ahead with an A/B test, you need to define clear start and end points. Letting an A/B test run until there is a statistically significant winner (for example, until the red button beats the green one, or vice versa) may seem logical, but it isn’t: it means you ignore the possibility that, for this sample, it simply doesn’t matter which colour is used. And that is just as realistic an outcome.
No result is also a result. So always define the duration before each test (possibly in combination with a minimum sample size), for example one business cycle or a set number of weeks. Don’t forget to take into account factors such as cookie deletion, which can cause test results to lose reliability over time.
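For the minimum sample size mentioned above, a common rule of thumb is the standard two-proportion formula. A minimal sketch, assuming the conventional defaults of 95% confidence (two-sided) and 80% power, with hypothetical baseline and target conversion rates:

```python
from math import ceil

def min_sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Per-variant sample size for detecting a lift from p1 to p2.
    Defaults correspond to 95% confidence (two-sided) and 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical: baseline converts at 4%, we want to detect a lift to 5%
n = min_sample_size(0.04, 0.05)
print(f"{n} visitors per variant")  # 6735 per variant
```

Dividing that number by your average traffic per variant gives a test duration you can fix in advance, rather than peeking until significance appears.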
5. Request real feedback from users

Something I tell every organisation I work with is that A/B testing is not the ultimate way to improve conversion. Suppose a simple website straight out of the early nineties, with bold capital letters and a few big buy buttons, converts better than a colourful page with great copy and beautiful images. Is that the best option? Spoiler alert: probably not. The goal is to improve, and for that it is imperative that you really engage with your visitors. Not just via an online poll with multiple-choice questions, but in real conversations in which you listen closely to criticism and suggestions. Or use panel research, perhaps even based on eye tracking. The more diverse your range of test methods, the more valid your findings will be.
A/B testing is a useful tool for improving your conversion, but by no means the holy grail. Use it to optimise smaller details in the design or structure of a web page, but don’t mistake it for a method of innovation. What really makes or breaks conversion is how you appeal to website visitors, generate enthusiasm and help them along their customer journey. That requires, above all, empathising with your target group, being creative and having a well-organised conversion optimisation process.
This article was previously published on Marketingfacts.nl, written by: