Split testing: a simple way to get more people reading your e-newsletters (and clicking the links they contain...)
If you send e-newsletters regularly, you probably find yourself wondering how you can increase the number of people reading them. Here's how: split testing.
What is split testing?
Split testing involves sending variants of your e-newsletters to some of your mailing list, monitoring the performance of each, and sending the 'best' version to the remainder of your list. It generally involves four main steps:
- You create two or more versions of your e-newsletter.
- You send these different versions to a percentage of email addresses on your mailing list.
- You compare how each version of your e-newsletter performs (in terms of either opens or click throughs).
- You roll out the best performing version to the remaining email addresses on your list.
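The four steps above can be sketched in a few lines of Python. This is purely illustrative – the subscriber addresses, subject lines and open counts are made up, and in practice your e-marketing tool does all of this for you:

```python
import random

# Hypothetical mailing list and two subject-line variants
# (all names here are illustrative, not from any particular tool)
mailing_list = [f"subscriber{i}@example.com" for i in range(1000)]
variants = {"A": "March newsletter", "B": "5 tips for a brilliant March"}

# Steps 1-2: send each variant to 10% of the list
random.shuffle(mailing_list)
sample_size = len(mailing_list) // 10
test_groups = {
    "A": mailing_list[:sample_size],
    "B": mailing_list[sample_size:2 * sample_size],
}
remainder = mailing_list[2 * sample_size:]

# Step 3: compare performance; in reality these open counts
# would be reported by your e-marketing tool
opens = {"A": 18, "B": 27}
winner = max(opens, key=opens.get)

# Step 4: roll out the winning version to the rest of the list
print(f"Sending variant {winner} ({variants[winner]!r}) "
      f"to {len(remainder)} remaining addresses")
```

The point of the sketch is simply that the test groups must not overlap, and that the 'remainder' – the bulk of your list – only ever receives the winning version.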
A/B testing and multivariate testing
Strictly speaking, there are two types of split testing: ‘A/B testing’ and ‘multivariate’ testing. A/B testing involves just two versions of an e-newsletter being tested, and multivariate (as the name suggests) involves several.
What sort of things can I test?
There are a variety of things you can test:
- Subject header – the title of the email that recipients see in their inbox (does including the recipient’s name in it help? Is a longer or shorter subject header better?)
- Sender – the person who the email is coming from (open rates may vary, for example, depending on whether you send your email using a company name or an individual’s)
- Content – different text or images in the body of your email may elicit different responses to your message, and consequently influence the number of click-throughs.
- Time of day / week – you can test different send times to see which generate the most opens and click-throughs.
With all the above variables, you will need to decide whether to pick a winning e-newsletter based on open rate or click-through rate. Open rates are generally used to determine the winner of subject header, sender and time-based tests; click-through rates tend to be used as a measure of success when establishing which sort of content you should use in an e-newsletter.

If you want to be very clever about things, you could run sequential tests – for example, you could carry out a subject header test, pick a winner and then run a content-based test using three emails sent with that subject header but different copy in each. The more complex your tests, however, the more time-consuming it all becomes – you may need to start segmenting lists, spend a lot of time on copywriting and so on.
How do I carry out a split test?
Most popular e-marketing solutions – such as Getresponse, Campaign Monitor, Aweber and Mailchimp – come with split testing functionality. This lets you create different versions of your e-newsletter, choose sample sizes and specify whether you want to measure success based on open rates or click-throughs; the software then handles the rest of the test, automatically sending the best performing e-newsletter to the remainder of the email addresses on your list.
Split testing and statistical significance
The key thing worth remembering about split tests is that the results have to be statistically significant – otherwise you can't have confidence in using them. There are two main ways to help ensure this:
- using a mailing list that contains quite a lot of records (Aweber suggest only split testing when you are dealing with a list containing more than 100 email addresses)
- testing using sample sizes that deliver meaningful results
The maths of split testing is surprisingly complicated, and it is quite easy to run split tests that seemingly produce winners but don’t actually have any statistical significance. It’s relatively straightforward to work out correct sample sizes for simple A/B tests - Campaign Monitor have a good guide to A/B sample size here - but working out the best approach to samples for multivariate tests is tricky. For a bit of a primer on the latter, you might wish to read this article on split testing samples from Lucidview. As a rule of thumb though, using larger percentages of your data in tests and running longer tests will deliver the most accurate set of results.
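To give a feel for the maths, here is one common way of checking whether an A/B result is significant: a two-proportion z-test on the open rates of the two variants. This is a standard statistical test rather than anything specific to the tools mentioned above, and the numbers below are invented for illustration; it only needs Python's built-in math module:

```python
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Return (z, p_value) for the difference in open rates
    between two e-newsletter variants."""
    rate_a = opens_a / sent_a
    rate_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis that there is
    # no real difference between the two variants
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / std_err
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120 opens from 500 sends; variant B: 90 opens from 500 sends
z, p = two_proportion_z_test(120, 500, 90, 500)
print(f"z = {z:.2f}, p-value = {p:.3f}")
```

A p-value below 0.05 is the conventional threshold for calling the difference significant. With small samples – say 20 opens versus 25 – the p-value will typically be well above 0.05, which is exactly why an apparent 'winner' from a tiny test may mean nothing at all.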
Which tool is best for split testing?
When reviewing the most popular e-marketing apps, we've found Getresponse to have the best split testing functionality (it allows you to test more variants of e-newsletters against each other than its key competitors); Mailchimp and Aweber are very good too. Campaign Monitor's split testing functionality is pretty basic, in that only two versions of your e-newsletter can be tested against each other; and Mad Mimi doesn't currently offer split testing at all.