As you probably already know, design isn't really about putting together a set of elements or an overall look that you like, or even one that your client likes. It's about putting together something that pleases the consumer's eye and furthers the client's conversion goals. The best way to figure out what does that is to perform A/B testing.

A/B testing (or split testing) is the process of presenting different design variants to visitors and measuring the results. It's a great way to test the market. When done right, it can greatly increase conversion. But you need to know the fundamentals of the process to get the best results.

Use the Right Tools

Depending on your platform, your level of expertise, and any specific needs, you have plenty of options on the market for A/B testing. If you are a coder, you can build the tests yourself. On the other hand, Optimizely and Unbounce are two options for those who aren't proficient in code or need the work done fairly quickly. Pricing varies for each.

But it all boils down to two necessities. First, you need to be able to randomly show design variants to your visitors. Second, you need to be able to collect information about how each visitor reacted to the variant they were shown.
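To make that concrete, here's a minimal sketch of those two pieces in Python, assuming a simple server-side setup. The variant names, the visitor ID, and the record_result function are placeholders for illustration rather than part of any particular tool:

```python
import random

VARIANTS = ["control", "treatment"]  # e.g. the current page vs. one design change

def assign_variant():
    """Randomly pick which version of the page a new visitor will see."""
    return random.choice(VARIANTS)

def record_result(visitor_id, variant, converted):
    """Store how the visitor reacted to the variant they were shown.
    In practice this would write to a database or an analytics tool."""
    print(f"{visitor_id},{variant},{int(converted)}")

# A visitor arrives, gets a variant, and later does (or doesn't) convert.
variant = assign_variant()
record_result(visitor_id="12345", variant=variant, converted=True)
```

Tools like Optimizely handle both the assignment and the reporting for you; the point is simply that whatever setup you choose has to do both jobs.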

Only One Variable At A Time

When doing split testing, it’s important to take a micro approach. You have to test each item separately. Micro-level testing takes time, but it yields more accurate results because it’s easier to tell exactly what is affecting conversion.

Let’s say you’re testing the conversion rates associated with design on a promotional gifts site, like BrandMe magnets. Each portion of your design must be tested separately. You may test the color of the shopping cart in one session and then test its position in another session. During each of these tests, the rest of the page must stay the same, including products, copy, and other elements. As you get further along into testing, you may start to look at how certain elements work together. But again, you start on a micro level and go from there.
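If you were scripting this yourself, the discipline might look something like the hypothetical experiment definitions below, where each experiment changes exactly one element and leaves everything else at its default. The element names are made up for illustration:

```python
# Each experiment changes exactly one element; everything else stays the default.
experiments = [
    {
        "name": "cart-button-color",
        "control":   {"cart_button_color": "blue"},
        "treatment": {"cart_button_color": "orange"},
    },
    {
        "name": "cart-button-position",
        "control":   {"cart_button_position": "top-right"},
        "treatment": {"cart_button_position": "bottom-right"},
    },
]

# Run these as separate tests, never by changing both the color and the
# position of the button inside a single test.
```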

Simultaneous Testing

A/B testing should also happen simultaneously. You can control a lot of factors in the design, but there are other factors outside of your control. Take that promotional gifts site we were talking about: there could be any number of special circumstances affecting conversion rates on any given day. There may be a cold front that makes the personalized sweatshirts suddenly very desirable. Or perhaps a blog recently reviewed the selection of keychains and sent extra traffic your way. If you tested the variants on different days, these events would skew your results and not give you the information you’re looking for.

Give It a Significant Amount of Time

A common mistake with A/B testing is not giving the test enough time to collect data. As with any experiment, the more data you have, the better your results will be. Avoid running tests for short periods of time. Instead, plan for significant testing periods.

Think of it like a poll. Stopping two people on the street and asking their opinion is not as helpful for determining popular opinion as polling a cross section of 100 people. And if you could stop 1,000 people, you’d get an even more accurate idea of what most people think. It takes longer to gather more information, so plan on spending that time.
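If you want a rough sense of why sample size matters, here's a small sketch of one common way to check whether a difference in conversion rates is more than noise, a two-proportion z-test. The traffic and conversion numbers are made up:

```python
from math import sqrt, erf

def two_proportion_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    # Normal CDF via the error function; no external libraries needed.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 2 conversions out of 50 visitors vs. 5 out of 50 looks like a big difference,
# but with so little traffic the p-value is nowhere near significant.
print(two_proportion_p_value(2, 50, 5, 50))        # roughly 0.24
# The same conversion rates with 20x the traffic give a much stronger signal.
print(two_proportion_p_value(40, 1000, 100, 1000)) # well under 0.05
```

The exact statistics your tool uses may differ, but the lesson is the same: small samples make even large-looking differences unreliable.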

New Visitors Only

When using split testing to experiment with design, be careful to only include new visitors. Returning users already know what your site looks like. You can confuse them, or even worse, annoy them, by changing the design of things they didn’t know were broken in the first place.

Once you’ve included a user in your testing, make sure they see the exact same version each time they visit your site for the entire time you’re running the test. Again, it’s about avoiding confusion and keeping the results as accurate as possible.
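If you're rolling your own test rather than relying on a tool to handle this, one common way to keep a returning participant in the same bucket is to hash their visitor ID, as in the hypothetical sketch below:

```python
import hashlib

def sticky_variant(visitor_id, experiment_name, variants=("control", "treatment")):
    """Deterministically map a visitor to a variant so they always see the same
    version of the test, no matter how many times they come back."""
    key = f"{experiment_name}:{visitor_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket for a given experiment.
print(sticky_variant("12345", "cart-button-color"))
print(sticky_variant("12345", "cart-button-color"))  # identical to the line above
```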

Speaking of results, don’t fall into the trap of arguing with them. You went through all the trouble of putting together a split test, so don’t defeat the purpose by trying to make the results say what you want them to say. There are times when some interpretation is necessary, but for the most part, whichever variant produced the best conversion rate is the one that wins. And if you’ve followed these best practices, you won’t regret it.
