Top 10 Terrific Testing Tips

Are you basing copy and design changes on your gut feeling or what your boss prefers? Stop that! There is a better way to optimize your website.

Fight HiPPOs with data!

By using a tool for A/B and multivariate testing, you can see what your visitors prefer and let the numbers speak. This allows you to find out which website tweaks make your visitors take action and increase sales.

Don’t have a tool for testing? No sweat! There are numerous tools to choose from, but I recommend that you first try Google Website Optimizer. It is free (get it here), and with it, you can find out what features you really need and which are just nice to have.

When you get started with testing, you need to keep some things in mind.

1. Remember that best practice is somebody else’s practice
You can look at what others have been doing successfully, but don’t assume what worked for them will work for you. Depending on your audience and context, results will vary. An example: some experts say that one-step forms are always more efficient, whereas others claim that multi-step forms are superior. So who’s right? Well, they’re equally right and equally wrong. Tests have found that sometimes one-step forms work better, and sometimes multi-step forms do. Don’t assume. Test what works best for your audience and context. Oh, and if someone says that red buttons are better than purple ones, don’t believe them. There is no universally superior button color, but you may want to test contrasting colors.


2. Be bold and break rules
You may have heard about Google’s testing of 41 shades of blue for the toolbar on Google pages, but unless you work for Google, forget about it. Minor changes will typically lead to minor improvements, and to find out if those minor improvements are statistically valid, you’ll need massive amounts of traffic. If you want results, you’ll have to make more drastic changes than that. You will often have to bend, if not break, company design guidelines in order to achieve substantial improvements. While doing so, it’s important to work closely with brand managers, so you do not step on anyone’s toes. Examples of what to test include images, tone, and functionality.

3. Don’t be greedy (limit your test)
Unfortunately, not everyone has the benefit of getting a lot of visitors, and traffic volumes may limit you to running an A/B split test. Maybe you think changing the primary headline, the main copy text, an image, the call to action, and a button will improve conversions. Do not, however, test all the changes at once in a single A/B test if you want to know what exactly causes the improvement (if any) and to find the best combination (for example, which call to action works best with which image). In the example above, there are five page sections (elements) with two variations each (original and challenger), which makes 2^5 = 32 possible combinations. If you lump all of that into one A/B test, you'll have no idea which change made the difference.
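To see how quickly combinations pile up, here is a minimal sketch (the element and variation counts are just the hypothetical numbers from the example above):

```python
# Count the combinations in a multivariate test: the product of the
# number of variations for each element being tested.

def combination_count(variations_per_element):
    total = 1
    for variations in variations_per_element:
        total *= variations
    return total

# Five page sections, two variations each (original + challenger):
print(combination_count([2, 2, 2, 2, 2]))  # 32

# Add a sixth element with three variations each and it explodes:
print(combination_count([3, 3, 3, 3, 3, 3]))  # 729
```

Every extra element multiplies the number of combinations, which is exactly why limited traffic forces you to limit the test.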

4. Calculate estimated time to completion
It is possible to create a test that will take years to end. Say you have six elements and three variations per element, your test page gets 5,000 page views per day, your conversion rate is 4%, and you expect an improvement of 10%. That multivariate test would take 30 years to complete even if you include all visitors in your test. Use a duration calculator to find out whether the test you’re planning to run is actually reasonable; Google’s calculator can be found here.
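For a rough sanity check before reaching for a calculator, a standard two-proportion sample-size formula can be sketched in a few lines. Note the assumptions: this is not Google's calculator (its exact formula, power, and confidence settings may differ, so its answer will differ too); the fixed z-values below correspond to a two-sided test at alpha = 0.05 with 80% power.

```python
# Order-of-magnitude estimate of multivariate test duration, using a
# two-proportion z-test sample-size formula. NOT Google's duration
# calculator -- treat the result as a sanity check only.

def estimated_test_days(combinations, daily_views, base_rate, relative_lift):
    z_alpha = 1.96   # two-sided test at alpha = 0.05
    z_beta = 0.8416  # 80% statistical power
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    visitors_per_combination = (
        (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    )
    return combinations * visitors_per_combination / daily_views

# Six elements with three variations each -> 3**6 = 729 combinations,
# 5,000 page views per day, 4% conversion rate, hoping for a 10% lift:
days = estimated_test_days(3 ** 6, 5000, 0.04, 0.10)
print(f"roughly {days / 365:.0f} years")
```

Even with different statistical settings, any such formula lands on the same conclusion: a 729-combination test at this traffic level takes a decade or more, which is to say, don't run it.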

5. Assess risks
While you may think of testing as gambling, rest assured that the odds are on your side. What you risk is fewer conversions in the short term; what you can gain is a lot more conversions in the long run. Even when your hypothesis turns out to be wrong, you'll learn something: how to avoid making costly mistakes in the future. To minimize risk, you might wish to expose a smaller share of your visitors to the test. But bear in mind that if fewer visitors are included, the test will take longer to complete.
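The trade-off in that last point is simple inverse proportionality, sketched here with made-up numbers:

```python
# Test duration scales inversely with the share of traffic exposed:
# halve the exposure, double the wait.

def scaled_duration(full_traffic_days, exposure_fraction):
    return full_traffic_days / exposure_fraction

print(scaled_duration(14, 1.0))   # all visitors -> 14 days
print(scaled_duration(14, 0.5))   # half of visitors -> 28 days
print(scaled_duration(14, 0.25))  # a quarter -> 56 days
```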

6. Validate your implementation
Before your test goes live, make sure to test your implementation. Things that could skew your results include misplaced or inaccurate test scripts (for instance, ensure that the goal script is implemented only on the actual goal page) and the loss of referral data. Some tools, Google Website Optimizer included, automatically validate the test for you. Do not blindly trust this: your implementation can be flawed regardless of what the automatic validation says.

7. Avoid conflicting tests
If you set up two or more simultaneous tests sharing the same goal, you risk ending up with inaccurate results. Say you want to increase conversions and are running one test to find out which headline works best and a second test to find out which image works best. The two tests won’t share data, so you won’t know what combination of image and headline a converting visitor has been exposed to. If you’re running several tests, make sure that they won’t add noise and uncertainty to each other.

(Photos from my company, inUse Insights; new brand since May 2012: Outfox.)

8. Look out for side effects
Wanting improvement is good. Being too eager to achieve results can, however, inadvertently lead to making costly mistakes—ones that are difficult to spot, too! While changes may increase conversions for one goal, they may decrease conversions for another. Typically a test is set up for only one goal. If you’re running an A/B test with Google Website Optimizer, use Google Analytics and custom variables to “tag” visitors. Then you’ll be able to find out how other goals and general visitor behavior are impacted by your test and its different variations. (Read more about the use of custom variables.) It’s also possible to include the goal script for Google Website Optimizer on multiple goal pages. But if you do that, conversions for both goals will be counted together, and you won’t know the performance of each individual goal.

9. Try out segmented testing
The standard way of running a test assumes that one size fits all. While running any test is better than not testing at all, it is possible that one alternative/combination is better for direct traffic and another is better for traffic from Google AdWords (just an example). Therefore, viewing the results for different segments will give you deeper insights. You can use custom variables to keep track of variations in Google Analytics and advanced segments to see how the different variations performed for different segments. You can also have a look at page 25 in The Techie Guide for Google Website Optimizer, or find a tool better suited for segmented testing.
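To make the idea concrete, here is a minimal sketch of segment-level analysis over a hypothetical visit log (the segment names, variation labels, and numbers are all made up; in practice Google Analytics advanced segments would do this aggregation for you):

```python
from collections import defaultdict

# Hypothetical visit log: (traffic_segment, variation, converted)
visits = [
    ("direct", "A", True), ("direct", "A", False),
    ("direct", "B", False), ("direct", "B", False),
    ("adwords", "A", False), ("adwords", "A", False),
    ("adwords", "B", True), ("adwords", "B", True),
]

# (segment, variation) -> [conversions, visits]
counts = defaultdict(lambda: [0, 0])
for segment, variation, converted in visits:
    entry = counts[(segment, variation)]
    entry[0] += converted  # True counts as 1
    entry[1] += 1

for (segment, variation), (conversions, total) in sorted(counts.items()):
    print(f"{segment:8s} {variation}: {conversions}/{total} = {conversions / total:.0%}")
```

In this toy data, variation A wins for direct traffic while variation B wins for AdWords traffic: exactly the kind of insight an aggregate winner-takes-all result would hide.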

10. Challenge the winner
So you’ve reached a statistically valid result of a test and announced the winner. Are you done now? No! Testing is not a one-time thing. It should be part of your process for continuous improvement. It’s not only possible, but likely, that there is a variation that will perform even better than the winner. That’s why a winner should be challenged from time to time. After all, Carl Lewis is no longer the fastest 100-meter dash runner, is he? At some point, there will always be a new winner.

View my presentation, “Top 10 Terrific Testing Tips”, from eMetrics Stockholm (PDF)