A/B tests are far more decisive when conversion, rather than engagement, is the focus. But this is easier said than done: changing website functionality based on A/B test results alone can sometimes do more harm than good.
Today, we would like to share our recommendations on A/B testing, based on the extensive experience we gained while adding new personalisation features to our clients' websites.
Common flaws of A/B testing in travel
- Looking only at click rates when we should really be looking at booking numbers. Yet it can take months for a click to turn into a booking.
- Randomness isn’t accounted for. If Group B performs marginally better at the end of a testing period, is that really enough to base business decisions on?
- Cookie churn and delayed bookings are a challenge. People research the same trip on various devices over several weeks, so they are likely to switch between test groups.
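The randomness point can be checked with a standard significance test. The sketch below applies a two-proportion z-test to hypothetical booking counts (the group sizes and booking numbers are invented for illustration) and shows that a "marginally better" Group B is often statistically indistinguishable from noise:

```python
from math import sqrt, erfc

def two_proportion_ztest(bookings_a, users_a, bookings_b, users_b):
    """Two-sided z-test for a difference in booking (conversion) rates."""
    pooled = (bookings_a + bookings_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (bookings_b / users_b - bookings_a / users_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return z, p_value

# Hypothetical test: Group B converts "marginally better" (2.15% vs 2.00%)
z, p = two_proportion_ztest(200, 10_000, 215, 10_000)
print(f"z = {z:.2f}, p = {p:.2f}")  # p is far above 0.05: not significant
```

With these invented numbers the p-value is roughly 0.46, so the 0.15-point uplift could easily be random variation; a much larger gap (or a much longer test) would be needed before acting on it.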
So, what do you do?
There are ways that A/B testing can be improved to better suit the travel industry. These include:
- Analyse engagement in addition to booking data. Looking at areas of the shopping experience with a higher number of users can give excellent clues.
- Look at other data available in addition to that gleaned from A/B testing. At bd4travel we use our personalisation technology to help with this.
- Segment, segment, segment. Cut the results from A/B testing in many ways to get a better picture of how users behave in different scenarios.
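As a sketch of the segmentation idea, the snippet below cuts hypothetical per-session test results by segment (the session records and segment labels are invented for illustration), so the same A/B comparison can be read per device, market, or any other dimension:

```python
from collections import defaultdict

# Hypothetical session records: (segment, variant, booked) where booked is 0 or 1
sessions = [
    ("mobile", "A", 0), ("mobile", "A", 1), ("mobile", "B", 1), ("mobile", "B", 1),
    ("desktop", "A", 1), ("desktop", "A", 0), ("desktop", "B", 0), ("desktop", "B", 0),
]

def conversion_by_segment(records):
    """Return the booking rate for each (segment, variant) pair."""
    tallies = defaultdict(lambda: [0, 0])  # (segment, variant) -> [bookings, sessions]
    for segment, variant, booked in records:
        tallies[(segment, variant)][0] += booked
        tallies[(segment, variant)][1] += 1
    return {key: bookings / total for key, (bookings, total) in tallies.items()}

rates = conversion_by_segment(sessions)
print(rates[("mobile", "B")])   # B wins on mobile in this toy data
print(rates[("desktop", "B")])  # ...but not on desktop
```

The same cut can be repeated for market, traveller type, or time of day; a variant that looks flat overall may be winning in one segment and losing in another.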
Is A/B testing really worth it in travel?
The answer is ‘yes’, but you need to implement A/B testing protocols that have been developed with the peculiarities of travel in mind. From highly complex products to the uniqueness of each traveller’s criteria – multiple factors must be designed into the testing process to get reliable results.