Split Testing — A Way to Improve Open Rates and Conversions

If you create and send email newsletters regularly, you may be wondering how you can increase the number of people reading them.

Here’s one way: by using split testing.


What is split testing?

Split testing involves sending variants of your e-newsletters to some of the people on your mailing list, monitoring the performance of each, and sending the ‘best’ version to the remainder of your list.

It generally involves four main steps:

  1. You create two or more versions of your e-newsletter.

  2. You send these different versions to a small percentage of email addresses on your mailing list.

  3. You compare how each version of your e-newsletter performs in terms of either opens or click-throughs.

  4. You roll out the best-performing version to the remaining email addresses on your list.
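
To make those steps concrete, here is a minimal Python sketch of the whole flow. Everything in it is hypothetical: send_variant stands in for whatever send API your email tool exposes, and the opens are simply simulated at random.

    import random

    def send_variant(variant, recipients):
        # Hypothetical stand-in for your email tool's send API; for
        # illustration it simply simulates opens at a random rate.
        open_rate = random.uniform(0.15, 0.35)
        return sum(random.random() < open_rate for _ in recipients)

    def run_split_test(addresses, variants, sample_fraction=0.2):
        random.shuffle(addresses)

        # Steps 1-2: carve off a sample and split it evenly across variants
        sample_size = int(len(addresses) * sample_fraction)
        sample, remainder = addresses[:sample_size], addresses[sample_size:]
        group_size = len(sample) // len(variants)

        # Step 3: send each variant to its own group and measure its open rate
        results = {}
        for i, variant in enumerate(variants):
            group = sample[i * group_size:(i + 1) * group_size]
            results[variant] = send_variant(variant, group) / len(group)

        # Step 4: roll out the best performer to the rest of the list
        winner = max(results, key=results.get)
        send_variant(winner, remainder)
        return winner, results

    addresses = ["user%d@example.com" % i for i in range(1000)]
    print(run_split_test(addresses, ["Subject A", "Subject B"]))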

Multivariate testing versus A/B testing

Strictly speaking, there are two types of split testing: ‘A/B testing’ and ‘multivariate’ testing. A/B testing pits just two versions of an e-newsletter against each other, while multivariate testing (as the name suggests) involves several.


What sort of things can I test?

There are a variety of things you can test, including:

  1. Subject header — the title of the email that recipients see in their inbox (does including the recipient’s name in it help? Is a longer or shorter subject header better?)

  2. Sender — the person the email comes from (open rates may vary, for example, depending on whether you send your email under a company name or an individual’s)

  3. Content — different text or images in the body of your email may elicit different responses to your message, and consequently influence the number of click-throughs.

  4. Time of day / week — you can test different send times to see which generate the most opens and click-throughs.

With all the above variables, you will need to decide whether to pick a winning e-newsletter based on open rate or click-through rate.

Open rates are generally used to determine the winner of subject header, sender and time-based tests; click-through rates tend to be used as a measure of success when establishing which sort of content you should use in an e-newsletter.
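
As a quick illustration of the difference, the sketch below calculates both metrics from some invented figures and shows that each metric can crown a different winner. (Some tools measure clicks per open rather than per delivered email, so check how your tool defines click-through rate.)

    # Invented example figures: (delivered, opens, clicks) for each version
    stats = {
        "Version A": (500, 150, 20),
        "Version B": (500, 120, 35),
    }

    def open_rate(delivered, opens, clicks):
        return opens / delivered      # share of delivered emails opened

    def click_through_rate(delivered, opens, clicks):
        return clicks / delivered     # share of delivered emails clicked

    for metric in (open_rate, click_through_rate):
        winner = max(stats, key=lambda version: metric(*stats[version]))
        print(metric.__name__, "winner:", winner)
    # Version A wins on open rate; Version B wins on click-through rate.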


More sophisticated split testing

If you want to be very clever about things, you could run sequential tests – for example, you could carry out a subject header test, pick a winner, and then run a content-based test with three emails that share the winning subject header but contain different copy.

Alternatively, you could use ‘goals’ as part of your split testing to see which of your e-newsletters is best at generating conversions for you.

For example, you could test two different versions of your newsletter against each other to see which generates the most sales of your products.

Many email marketing solutions let you add a snippet of code to the post-sales pages of your website so that you can track these conversions.
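
The exact snippet varies from provider to provider, but the underlying idea is a small tracking request fired from your post-sales confirmation page. As a rough, entirely hypothetical sketch of the server side, here is a tiny Python endpoint that counts conversions against whichever e-newsletter variant the buyer clicked through from (the URL and parameter names are invented):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from collections import Counter

    conversions = Counter()  # conversions recorded per e-newsletter variant

    class ConversionHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A post-sales page would fire e.g. GET /convert?variant=A,
            # with the variant tag carried through from the email's links.
            query = parse_qs(urlparse(self.path).query)
            variant = query.get("variant", ["unknown"])[0]
            conversions[variant] += 1
            self.send_response(204)  # empty response; this is just a beacon
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ConversionHandler).serve_forever()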

The more complex your tests, however, the more time-consuming it all becomes – you may need to start segmenting lists, spend a lot of time on copywriting and so on.


How do I carry out a split test?

Most popular e-marketing solutions – such as Getresponse, Campaign Monitor, Aweber and Mailchimp – come with split testing functionality.

This lets you create different versions of your e-newsletter, choose sample sizes, and specify whether you want to measure success based on open rates or click-throughs; the tool then handles the rest of the test, automatically sending the best-performing e-newsletter to the remainder of the email addresses on your list.



Split testing and statistical significance

The key thing worth remembering about split tests is that the results have to be statistically significant – otherwise you can’t have confidence in using them.

This means

  • using a mailing list that contains quite a lot of records (Aweber suggests only split testing when you are dealing with a list containing more than 100 email addresses)

  • testing using sample sizes that deliver meaningful results

The maths of split testing is surprisingly complicated, and it is quite easy to run split tests that seemingly produce winners but don’t actually have any statistical significance.
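
By way of illustration, one quick sanity check you can run on a simple A/B result is a two-sided two-proportion z-test, which needs nothing beyond Python's standard library. This is a simplified sketch, not a substitute for proper statistical tooling:

    from math import sqrt, erfc

    def ab_significance(opens_a, n_a, opens_b, n_b):
        # Two-sided two-proportion z-test on open rates. Returns the
        # p-value; below roughly 0.05 the difference is conventionally
        # treated as statistically significant.
        p_a, p_b = opens_a / n_a, opens_b / n_b
        pooled = (opens_a + opens_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return erfc(abs(z) / sqrt(2))

    # A 30% vs 25% 'winner' on 100 emails per variant is not significant...
    print(ab_significance(30, 100, 25, 100))      # p is about 0.43
    # ...but the same rates on 2,000 emails per variant would be.
    print(ab_significance(600, 2000, 500, 2000))  # p is about 0.0004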

It’s relatively straightforward to work out correct sample sizes for simple A/B tests (Campaign Monitor publishes a good guide to A/B sample sizes), but working out the best approach to samples for multivariate tests is tricky.
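
If you just want a ballpark figure for a simple A/B test, the standard two-proportion sample-size formula provides one; the sketch below assumes the conventional 5% significance level and 80% power (the z values of 1.96 and 0.84):

    from math import ceil

    def ab_sample_size(baseline_rate, expected_rate, z_alpha=1.96, z_beta=0.84):
        # Per-variant sample size for a two-proportion A/B test
        # (defaults: 5% significance level, 80% power).
        variance = (baseline_rate * (1 - baseline_rate)
                    + expected_rate * (1 - expected_rate))
        effect = (expected_rate - baseline_rate) ** 2
        return ceil((z_alpha + z_beta) ** 2 * variance / effect)

    # Detecting a lift from a 20% to a 25% open rate needs roughly
    # 1,100 addresses per variant:
    print(ab_sample_size(0.20, 0.25))  # 1090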

As a rule of thumb, though, using larger percentages of your list in tests and running tests for longer will deliver the most accurate results.


Which tool is best for split testing?

When reviewing the most popular e-marketing apps, we’ve found Getresponse to have the best split testing functionality (it allows you to test more variants of e-newsletters against each other than its key competitors); Mailchimp and Aweber are good too.

Campaign Monitor’s split testing functionality is pretty basic, in that only two versions of your e-newsletter can be tested against each other, and Mad Mimi doesn’t currently offer split testing at all.
