Test to achieve even better results
A/B split testing (explained below) can teach you a great deal about your customers’ behavior and, ultimately, help you reach a better conversion rate. However, companies often forget that an A/B test of a website and an A/B test of a newsletter are two completely different things, which need to be handled in completely different ways.
The core of the difference is, simply put, that a website visitor is on the site at the very moment the test is being carried out, whereas it is impossible to know at which point in time a test subject will open their newsletter – if, that is, it gets opened at all. The time lag and the uncertainty are factors that need to be taken into account, regardless of whether you want to measure opening rates or click rates.
It is a good idea to A/B test newsletters and campaigns, but a few things need to be considered in order to reach a result where you can be sure that the winning version really is the best one. Before you begin testing, adopt an attitude of welcoming surprises and being open to working with unforeseen results.
What is an A/B split test?
First, let’s identify what an A/B, or A/B/C/D, split test actually is. You start by creating two or more versions of your newsletter or campaign. Then you send the different versions to randomized groups of recipients from your recipient lists (the size of your test groups is up to you). After a period of testing, you determine which version has worked best, and that version is then sent to the rest of the list automatically. If you wish, you can select and send the winner manually instead.
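To make the mechanics concrete, here is a minimal sketch in Python of how a recipient list could be split into randomized test groups plus a holdout that later receives the winning version. It is an illustration only, not Carma’s actual implementation; the function name, the 20 % test share and the example addresses are assumptions.

```python
import random

def split_for_ab_test(recipients, versions=("A", "B"), test_share=0.2, seed=42):
    """Randomly assign a share of the list to equally sized test groups.

    The remaining recipients form the holdout that later receives the winner.
    """
    shuffled = recipients[:]
    random.Random(seed).shuffle(shuffled)

    test_size = int(len(shuffled) * test_share)
    test_pool, holdout = shuffled[:test_size], shuffled[test_size:]

    group_size = len(test_pool) // len(versions)
    groups = {
        version: test_pool[i * group_size:(i + 1) * group_size]
        for i, version in enumerate(versions)
    }
    return groups, holdout

# Example: 20% of the list split evenly between versions A and B,
# the remaining 80% held back for whichever version wins the test.
groups, holdout = split_for_ab_test([f"user{i}@example.com" for i in range(1000)])
```

Randomizing the assignment is the important part: if the test groups are not comparable, differences between the versions tell you nothing.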
Test the right things
The most common things to test in newsletter A/B split tests are opening rate and number of clicks. When testing opening rate, you will usually be working with different versions of subject lines.
If you want to test different types of content, you will want to measure click rates. There are many kinds of content that can be tested: some to optimize the current communication, others to draw more long-term conclusions.
If you want to find out what kind of content appeals most to your recipients, click rate is the metric to measure.
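As a simple illustration of the two metrics, the sketch below computes opening rate and click rate as opens or clicks divided by delivered emails and picks a winner for each. The numbers are made up; the point is that the same test can have different winners depending on which metric you choose.

```python
# Illustrative campaign statistics per version; the numbers are made up.
stats = {
    "A": {"delivered": 1000, "opens": 220, "clicks": 35},
    "B": {"delivered": 1000, "opens": 245, "clicks": 28},
}

def winner(stats, metric):
    """Pick the version with the highest rate for the chosen metric."""
    rates = {v: s[metric] / s["delivered"] for v, s in stats.items()}
    return max(rates, key=rates.get), rates

# Subject-line tests are usually judged on opening rate,
# content tests on click rate -- note that the two can disagree.
print(winner(stats, "opens"))   # ('B', {'A': 0.22, 'B': 0.245})
print(winner(stats, "clicks"))  # ('A', {'A': 0.035, 'B': 0.028})
```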
Examples of things to test when optimizing content
- Special offers
- Color and shape of buttons
- Composition
- Images
- Link placement
- Text length
Prolong the time span of your test
Since you don’t know when your recipients will be opening the newsletter, you need to have a rather long time span for your test. The point is that as many recipients as possible should have time to open the email before you determine which version works best.
If you’re the eager type who wants the tests finished quickly, you run a high risk of getting misleading results. The test might show that version A is the best one during the allotted time, and that version is then sent to the rest of the recipients. But an hour later, more people in the test group have had time to open the email, and it transpires that version C is actually the winner.
This is important to consider. How you should handle it depends on how much you know about the behavior of your recipients. Do they usually open their emails as soon as they get them, or is there a certain time of day when they tend to read them? How many of your recipients read their emails on their mobile phones, and do you have the same opportunities for driving conversion there?
In order to get a more reliable result, you should extend your testing time span as much as possible. If you’re planning a campaign, it is a good idea to allow a time span of 24 hours; during that time, the majority of your test group should have had time to decide whether or not to open the email. If you don’t have that much time, three hours should be your minimum. Unless, of course, you’re sending the emails in the middle of the night or on a major holiday.
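The scenario above, where the early leader is not the final winner, can be illustrated with a small sketch that counts opens per version at different cut-off times. The open events and timestamps are invented for the example.

```python
from datetime import datetime, timedelta

send_time = datetime(2024, 3, 1, 9, 0)

# Illustrative open events (version, time opened); the data is made up.
opens = [
    ("A", send_time + timedelta(minutes=20)),
    ("A", send_time + timedelta(minutes=45)),
    ("C", send_time + timedelta(minutes=50)),
    ("C", send_time + timedelta(hours=2)),
    ("C", send_time + timedelta(hours=5)),
]

def leader_at(opens, cutoff):
    """Count opens per version up to the cutoff and return the current leader."""
    counts = {}
    for version, opened_at in opens:
        if opened_at <= cutoff:
            counts[version] = counts.get(version, 0) + 1
    return max(counts, key=counts.get), counts

print(leader_at(opens, send_time + timedelta(hours=1)))   # ('A', {'A': 2, 'C': 1})
print(leader_at(opens, send_time + timedelta(hours=24)))  # ('C', {'A': 2, 'C': 3})
```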
Make a plan
Before you start your test, decide what you want to achieve. It’s good to have a hypothesis to test, because then you’ll acquire not only a test result, but also new information about your recipients. You can, of course, carry out random tests when you feel like it, but then you should be aware that the testing is in fact random, and not a basis for long-term conclusions.
What’s a good test group size?
In order for a newsletter split test to be statistically justifiable, Compost recommends that you have a list of at least 10,000 recipients. If you want to test more than two versions, with an A/B/C/D test, you need a list of at least double that size.
You can carry out split tests with smaller lists too, but unless you get a very clear result, it’s difficult to know whether the differences are really down to a better version of the email or just down to chance. Remember, you can also test simply because it’s fun and interesting!
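Why list size matters can be illustrated with a standard two-proportion z-test on opening rates. The sketch below is a generic statistical illustration, not Compost’s own significance calculation, and the open counts are made up.

```python
from math import sqrt, erf

def open_rate_difference_significant(opens_a, n_a, opens_b, n_b, alpha=0.05):
    """Two-proportion z-test: is the difference in opening rate more than chance?"""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p_pool = (opens_a + opens_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < alpha, p_value

# A 20% vs 25% opening rate is convincing with 1,000 recipients per group...
print(open_rate_difference_significant(200, 1000, 250, 1000))  # (True, ~0.007)
# ...but the same rates with 100 recipients per group may well be chance.
print(open_rate_difference_significant(20, 100, 25, 100))      # (False, ~0.40)
```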
Think about this when A/B testing
- Decide which newsletter elements you want to test alternatives to
- Identify what the alternatives can be
- Weed out the alternatives that you would not choose to use, even if they came out on top
- Decide which kind of measurement shows the best alternative for the different elements – opening rate or click rate
- Make a plan for what to test and in which order
- Test one thing at a time
- Make sure that your test time is long enough to allow a representative share of your test group to open the email. The longer the test time, the more representative the result
- If you want to be able to draw long-term conclusions, repeat your test at least three times
This is an article in the Carma Campus Class in Analyze.