The Do’s and Don’ts of A/B Split Testing Experiments

When a new website is designed to elicit some sort of action (e.g. a sale, a registration, a form completion), it is often assumed that the work of the web designer and developer is complete once the site is live and bug free. However, launching or relaunching a site should be seen as just one event in a continuous process of conversion rate optimization (CRO).

Through A/B testing (comparing the effectiveness of one version of a website against another), a website owner should look to continuously improve their site's conversion rates, whether the conversion is a visitor filling in a form, making a purchase, watching a video, clicking a button or taking some other action of value. Web designers, UX gurus and CRO experts will all have opinions on how best to maximize conversions, but until at least two different versions of a website are compared with real users, no one can be sure whether improvements can be made.

A/B Testing In Brief

To conduct a standard A/B split test, a web designer will normally compare Design A to Design B, distributing the site's traffic 50/50 between them. Performance metrics like conversion rate, bounce rate and visitor engagement are then measured. Occasionally you will see sites split testing more than two versions at the same time, although in most cases this is not ideal.

Once enough data has been collected for the results to be statistically significant and a winner is known, the best performing version of the site should then be compared against a new design (version C). After each trial, further incremental improvements should be sought by running another split test. Sometimes running continuous split tests may not be possible or advisable because of the effect on visitors, but in principle any website designed around an action should undergo a long term process of split testing and conversion rate optimization.
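The split-and-measure loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the `converts` callback is a hypothetical stand-in for observing whether a real visitor converted.

```python
import random
from collections import Counter

def run_split_test(visitors, converts):
    """Minimal A/B split test sketch: each visitor is randomly assigned
    to Design A or Design B (50/50), conversions are tallied per variant,
    and the per-variant conversion rates are returned.

    `converts(visitor, variant)` is a hypothetical callback standing in
    for the real observation "did this visitor take the desired action?".
    """
    shown = Counter()
    converted = Counter()
    for visitor in visitors:
        variant = random.choice("AB")   # 50/50 traffic split
        shown[variant] += 1
        if converts(visitor, variant):
            converted[variant] += 1
    return {v: converted[v] / shown[v] for v in sorted(shown)}
```

The winning variant would then become the new baseline "A" in the next round of testing, compared against a fresh design "C".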

A/B Testing Website Infographic

Web Design Elements To Test

To get more conclusive results about which elements of a website are positively or negatively affecting it, it is best to focus on specific elements rather than split testing two completely different versions of a website. The elements below can all have significant effects on conversion rates, so they should be tested.

  • Layout
  • Typography
  • Copywriting (headlines, sub headings, main body text)
  • Images or videos on the home page, landing page, and product pages
  • Product or service pricing
  • Promotional offers
  • Call to action (CTA) elements like buttons, wording, color, and placement
  • Length of text
  • Headline or product description
  • Form length and field types


Do's of A/B Testing
Below are some of the best practices for A/B split testing.

  • Use Proven Tools

There are several A/B tools that can help you implement split tests, and it is recommended you use external software to run a test. Google Analytics comes with an excellent free A/B testing tool: formerly known as Google Website Optimizer, it is now integrated into Google Analytics as Google Analytics Content Experiments. For those with the budget, try one of the popular paid solutions such as Optimizely, Unbounce or Visual Website Optimizer. WordPress users may also want to look at wp-abtesting.com to set up and track split test experiments for WordPress themes.

  • Patience and Knowing When to Stop

Running an A/B test requires a lot of patience, and stopping a test prematurely can lead to unreliable results. A test duration calculator can give you guidance on how long to run a test based on key parameters such as traffic volume and baseline conversion rate. On the flip side, running a split test for too long and not knowing when to quit can cost a site conversions, because the poorer design is kept live longer than necessary. If it is obvious that one design is clearly inferior to another, drop it and move on to trying to better the winning design.

  • Consistency

A website element variant being tested should be consistent across the whole site. If you are testing a red buy button on the home page, the same red buy button should appear on every page where that button is located. Showing different variants of the red buy button design will distort the test results.

  • Repetition for Repeat Visitors

A web design element or component that is being tested should be shown consistently to repeat visitors. When the same variation is shown to the same people, they will either consistently generate the same reaction or change their behaviour over time, and both signals are worth capturing. The split testing software mentioned above will make sure this happens.
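One common way testing tools keep variants consistent for repeat visitors is deterministic bucketing: instead of a fresh coin flip on every visit, the visitor's id is hashed, so the same person always lands in the same bucket. A minimal sketch, assuming a stable visitor id (e.g. from a cookie) and a hypothetical experiment name:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "buy-button-color") -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor id together with the experiment name means the
    same visitor always sees the same variant, while the buckets still
    split roughly 50/50 across many visitors.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # an integer in 0-99
    return "A" if bucket < 50 else "B"

# The same visitor always gets the same variant on every visit:
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Including the experiment name in the hash also prevents one test's bucketing from correlating with another's if several experiments run over time.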

  • Do Several A/B Tests

One A/B test is not sufficient to draw final conclusions. There are only three possible outcomes of a split test: a positive result, a negative result, or no change. Repeating tests on the same element with different variants at different times can surface further improvements.

Don'ts of A/B Testing

It is important any split test you set up is designed methodically and you avoid the common pitfalls that can lead to false conclusions. Here are some of the things you should avoid when A/B testing:

  • Not Running the Two Versions Simultaneously

Split tests should be run over the same time period where possible, and the website traffic must be evenly split between the two versions. The time of day, day of the week and month of the year all affect user behaviour and traffic volumes, so you must try to keep them as uniform as possible across the designs being split tested. If you do not have software to test simultaneously, make sure you run each variant over exactly the same days of the week and timeframes (e.g. seven days from Monday to Sunday).

  • Making Conclusions Based on Small Sample Sizes

Statistically significant results require an adequate sample size collected over a suitable period of time (ideally at least 7 days). If you do not ensure you get a sufficiently large sample size, your results could be down to random variance alone.
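How large is "sufficiently large"? A rough answer comes from the standard power calculation for a two-proportion test. The sketch below uses the normal approximation; the baseline rate, the minimum lift you care about, and the significance/power levels are all inputs you would choose for your own site, and the example numbers are purely illustrative.

```python
from statistics import NormalDist

def required_sample_size(p_baseline: float, min_effect: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed *per variant* to detect an absolute
    conversion-rate lift of `min_effect` over `p_baseline`, using the
    normal approximation for a two-sided two-proportion test.
    """
    p1, p2 = p_baseline, p_baseline + min_effect
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative: a 5% baseline conversion rate, hoping to detect a lift to 6%,
# needs roughly eight thousand visitors per variant.
n = required_sample_size(0.05, 0.01)
```

Note how quickly the requirement shrinks as the detectable effect grows: doubling the minimum lift to two percentage points cuts the required sample to roughly a quarter, which is why chasing tiny improvements demands so much traffic.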

  • Running Tests on More than One Element

This is a common mistake made by people new to A/B testing. They tend to run tests on several website design elements at once, which often produces confusing results. With too many elements under the same evaluation criteria, no concrete conclusions can be drawn about what exactly is causing the differences in conversion rates. If you have the time and budget, split test single elements in each test to get clearer results.

  • Overruling A/B Test Results for Aesthetic Reasons

It has been shown time and time again that designs deemed aesthetically inferior can convert much better than designs that are more pleasing to the eye. When this happens, do not ignore the results and choose the design you think looks better. If the website is designed to ‘convert’ visitors then that is the end goal, and aesthetics should be secondary.