Science of Success
Why Test?
You test to identify the approaches, content and techniques that produce the most response and profit for your particular product or service. Testing can confirm your intuition; identify what works, what doesn't, and what is irrelevant; and provide you with the information you need to optimize your marketing.
Which offer pulls best? Which format or headline performs? Which lists are worth mailing? These are just a few of the questions you might use testing to answer.
Testing has traditionally been used to increase an organization's insight over time into what approaches work best for its particular products and services. However, with today's faster turnarounds and lower production costs, some forward-thinking marketers are building tests into individual mailings by staggering drop dates. Take the example of one association whose goal was to maximize registration for a trade show conference.
The customer planned an overall mailing of 60,000 pieces, but randomly selected 10,000 contacts for a preliminary test. With a fairly simple test design, the customer tested a control (their best guess at what worked) against two additional offers, two additional formats, and two additional headlines/teasers. The results showed that an optimized combination could produce 42% higher response than the control, and that combination determined the final piece mailed to the 50,000 remaining addresses.
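One way to read the results of a pre-test like this in code is to take the best-performing level of each tested element and combine the winners into the rollout piece. The cell names and response rates below are hypothetical, purely to illustrate the mechanics:

```python
# Hypothetical cell-level response rates from the 10,000-piece pre-test:
# a control plus two alternatives for each element being tested.
pre_test = {
    "offer":    {"control": 0.021, "offer_B": 0.026, "offer_C": 0.019},
    "format":   {"control": 0.021, "format_B": 0.024, "format_C": 0.022},
    "headline": {"control": 0.021, "headline_B": 0.020, "headline_C": 0.025},
}

# One-factor-at-a-time reading: keep the best-performing level of each
# element, then combine the winners into the rollout piece. (This assumes
# the elements' effects are roughly independent of one another.)
winners = {element: max(cells, key=cells.get)
           for element, cells in pre_test.items()}
print(winners)
```

Note the caveat in the comment: combining per-element winners only works if the elements don't interact strongly; a full factorial test would detect interactions but requires many more cells.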
Let's take a look at the various approaches to testing and analysis to see if this makes sense for you.
Direct Mail Testing Priorities
In selecting the most appropriate test design, you should:
Begin by evaluating whether there are tests you have already completed without realizing it; put another way, do you have underlying information about the lists you have previously mailed that you can correlate with results?
One example is the question of gender bias in your mail results: If you compare the ratio of men to women on your mailing list to the ratio of men to women among your responders, are they the same? If not, your product and/or marketing appeals to one more than the other. This is fairly easy to identify based just on the availability of first names from the original list and the response list. To evaluate your lists, rely on the gender identification list tool at www.listwist.com/xgenderfinder.asp. You may also have mailed using a variety of list sources, a variety of formats, and so on. While mailings done at different times are not as reliable as a controlled test, analysis may nevertheless provide key insight into trends and approaches that work best for you.
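The gender comparison above can be checked formally with a standard two-proportion z-test comparing the response rates of men and women. A minimal sketch, with entirely hypothetical counts, using only the Python standard library:

```python
from statistics import NormalDist


def response_rate_gap(men_mailed, men_resp, women_mailed, women_resp):
    """Two-proportion z-test: do men and women respond at different rates?
    Returns the z statistic and a two-sided p-value."""
    p_m = men_resp / men_mailed
    p_w = women_resp / women_mailed
    pooled = (men_resp + women_resp) / (men_mailed + women_mailed)
    se = (pooled * (1 - pooled) * (1 / men_mailed + 1 / women_mailed)) ** 0.5
    z = (p_m - p_w) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical counts: men respond at about 2.35%, women at about 1.63%
z, p = response_rate_gap(31_200, 732, 28_800, 468)
print(z, p)  # a |z| above ~1.96 suggests a real gender skew
```

If the p-value is small, the skew is unlikely to be chance, and the creative or list selection may be worth revisiting for the under-responding group.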
A Note on Sample Size
Generally, the higher your typical response rate, the smaller the sample size you will require. For example, a credit card marketer that averages a 1% response rate and wants to test an idea for a 10% lift (to 1.1%) would need a sample size of over 135,000 to tell whether a single A/B test is significant, with 75% statistical power (a 75% chance that a real effect will not be missed). On the other hand, an email marketer testing subject lines with a typical open rate of 35% and looking for a minimum three-point increase (to 38%) would need a sample size of only 3,500 to determine whether one subject line would outperform the other.
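These figures can be reproduced with the standard normal-approximation formula for comparing two proportions. The sketch below uses only the Python standard library; it is an approximation, not the exact method behind any particular commercial calculator:

```python
from statistics import NormalDist


def sample_size_per_cell(p1, p2, power=0.80, alpha=0.05):
    """Approximate sample size per cell for an A/B test comparing two
    response rates p1 (control) and p2 (test), using the standard
    two-proportion normal approximation with a two-sided alpha."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return int(num ** 2 / (p2 - p1) ** 2) + 1


# Credit-card mailer: 1% control vs 1.1% test, 75% power
print(sample_size_per_cell(0.01, 0.011, power=0.75))  # on the order of 140,000 per cell

# Email subject line: 35% vs 38% open rate, 75% power
print(sample_size_per_cell(0.35, 0.38, power=0.75))   # on the order of 3,500 per cell
```

The contrast is the point of the paragraph above: detecting a tiny absolute lift on a 1% base rate takes two orders of magnitude more names than detecting a three-point lift on a 35% base rate.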
There's a free sample-size calculator among our list tools to help you identify the size required to design a valid test. Just visit www.listwist.com/xsample.asp to take advantage of this easy-to-use tool.
Avoid Testing Gotchas
Use a head-to-head competition with a control to determine the winner. Head-to-head, concurrent testing provides insight that is least likely to be influenced by unknown factors. The more time that passes between mailings, the more likely it is that factors you are not accounting for will affect results, leaving you with less confidence in the overall predictions.
With A/B tests, test everything or just one thing. You can gain valuable insight either by testing multiple items as a single group or by testing just one item. Bear in mind, though, that when you group sets of elements, all you will be able to determine is which group performs best, not which of the elements in the group was responsible for the difference.
First test areas likely to give you the biggest response/profit boost. Experience shows that the most important areas to test are mailing list, offer, copy, format and seasonality, with mailing lists and offers being the two most significant. Don't start testing font options until you thoroughly understand these two.
Make sure you can identify responses. If you don't code your tests and capture the code during the response/purchase process, there's no sense in testing. You must be able to track and analyze results or you can't possibly know what works.
Make sure your sample is large enough. To determine sample size, use the test list calculator at www.listwist.com/xsample.asp.
Use the 80-20 rule. Many mailers elect not to test because, based on prior experience, they have a sense of what the control package will deliver, and they don't want to give up those known profits on the portion of the list that receives a test, which may not perform as well. The truth is, though, that by not testing they may be missing out on even higher returns and profits. There's a solution to this conflict: instead of testing on a 50-50 basis, test on an 80-20 basis, sizing the test at the minimum number our test list calculator determines to be statistically valid. That way, you minimize the risk to profit while still gaining insight into the optimal approach for your list.
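The 80-20 allocation reduces to simple arithmetic: give the test cells the minimum statistically valid quantity, cap the total test at 20% of the list, and mail the control to everyone else. A minimal sketch (the function name and the 3,500-per-cell figure are illustrative):

```python
def split_for_test(total, min_test_cell, n_test_cells, cap=0.20):
    """Allocate the smallest statistically valid test quantity, capped
    at `cap` (default 20%) of the list; the rest gets the control."""
    test_qty = min_test_cell * n_test_cells
    if test_qty > cap * total:
        raise ValueError("List is too small for a valid test under this cap; "
                         "reduce the number of cells or accept a larger split.")
    return total - test_qty, test_qty


# 60,000-name list, two test cells of 3,500 names each
control_qty, test_qty = split_for_test(60_000, 3_500, 2)
print(control_qty, test_qty)  # 53000 7000 -- well under the 20% cap
```

Here the test consumes under 12% of the list, so the known-performing control still reaches the large majority of names.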
Best Response Rate vs. Highest Profit?
In the simple example below, testing a higher price reveals that even though the marketer can expect a slightly lower response rate, the overall profit is much higher with the higher price.
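The response-versus-profit tradeoff comes down to one formula: profit equals orders times unit margin, minus the cost of the mailing. The prices, costs and response rates below are hypothetical, chosen only to illustrate how a lower response rate can still win on profit:

```python
def mail_profit(quantity, response_rate, price, unit_cost, mail_cost_per_piece):
    """Profit from a mailing: order margin minus total mailing cost."""
    orders = quantity * response_rate
    return orders * (price - unit_cost) - quantity * mail_cost_per_piece


# Hypothetical: the lower price pulls more response,
# but the higher price earns more profit.
low = mail_profit(50_000, 0.020, price=49, unit_cost=15, mail_cost_per_piece=0.60)
high = mail_profit(50_000, 0.017, price=69, unit_cost=15, mail_cost_per_piece=0.60)
print(low)   # ~ $4,000  (2.0% response at $49)
print(high)  # ~ $15,900 (1.7% response at $69)
```

The extra $20 of margin per order more than offsets the 0.3-point drop in response, which is exactly why tests should be judged on profit, not response rate alone.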