There are two things I want to convince you of in this article:
1) You need to do segmentation.
2) You need to A/B test.
In that order.
I could stop the article at this point, and let you go do those two awesome things. However, I think a bit of explanation is in order. Most convincing doesn’t happen after two sentences.
I’m going to lay out a series of propositional truths that will hit you so hard you can’t do anything but heed these instructions.
A/B Testing Is Not a Silver Bullet
I’m a huge champion of testing. My articles, workshops, seminars, tweets and rants usually have something to say about testing.
I’m passionate about A/B testing because I know it works. I’ve seen it work. I’ve helped to make companies millions of dollars through split testing. It is a proven way to gain more conversions.
But I also realize that testing is not a silver bullet.
Derek Halpern’s high-Fahrenheit rant about split testing put it this way, “I hate split testing. And I believe split testing is a big waste of time for many new business owners and entrepreneurs.”
Though Halpern’s over-the-top statement might have lit a few fires, it made a good point:
“Just because you can, doesn’t mean you should.”
I want to get more specific than that, though. I say you should, but I say you should do it strategically.
Why You Might Be Doing A/B Testing Wrong
A lot of marketers do testing wrong. They waste their time and their money by haphazardly conducting A/B testing, pulling out skewed results, and taking action that translates into zero conversion uptick.
That’s not testing. That’s messing around, and feeling good because you think you’re doing CRO.
Let me lay out three ways in which you might be doing your testing wrong. (Don’t worry; I’m going to get to the segmentation section soon.)
1. Testing for testing’s sake.
Many tests go awry from the very get-go. You hear the buzzword “split test,” and scurry off to create an Optimizely account.
Not so fast.
Why are you testing? More to the point, what are you testing? Do you know the reason and strategy behind testing? Do you think just running tests is going to magically increase your conversion rates? Well, it’s not going to happen that easily, and I’m sorry to disappoint you.
Testing just because it’s sexy, hot, easy, cheap, fun or trendy is not a strategic approach to testing. This is testing without a true purpose.
Split testing does not equal more conversions. Strategic split testing increases conversions (redux Propositional Point No. 1).
2. Testing minutiae.
Here’s what some marketers do when they start “testing.”
- Let’s try capitalizing all the words in the sentence, and see if it makes a difference.
- Now, let’s try testing it with a green button.
- Try making this box 22 pixels longer.
- Let’s remove one line of white space.
This is testing minutiae. There’s a place for that — maybe. Like maybe when you have 10k unique visitors monthly, or a 29% conversion rate already, or a perfectly optimized landing page in every other area.
But testing minutiae — the tiny details and the insignificant stylistic features — isn’t the place to find big wins. You might see a little conversion increase ... or was that merely the natural shifting of the conversion breezes?
And how much sand-sized testing are you going to do? Every font face, color combination, border size, and kern adjustment you can think of?
Testing the smallest elements is easy and fun, but it’s not always the way to get big wins.
Successful testing focuses on strategic points — a critical landing page for a product launch, or a really significant email campaign. A really good A/B test lines up with a really critical moment in a business’s existence.
3. Conducting non-data-driven tests.
Successful tests arise, not from someone’s overactive imagination, but from the numbers, graphs, charts, metrics and statistics that you compile.
- “Oh, it looks like our mobile conversions are down by half! Let’s run a test …”
- “I’m noticing that we have 24% of our visitors from the U.K. I wonder if we should create a U.K.-only landing page with appropriate spelling changes.”
- “In the month of June, we had a 70% decrease in branded search traffic. Let’s test conversions based on the corresponding rise in non-branded organic keywords …”
- “I noticed that Landing Page D has a 65% bounce rate, which is worse than all the others. Let’s test it …”
The best tests arise from looking at data, forming a hypothesis, and testing that hypothesis.
In that way, it’s a lot like the scientific method. (Greetings, forgotten high school chemistry.)
- Observe and measure — Stare at Google Analytics to find trends, patterns, and characteristics.
- Form a hypothesis — Murmur some possible reason that makes sense of the lines, percentages and pie charts.
- Conduct a test — Use your shiny new testing software to run some A/Bs.
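To make those three steps concrete, here is a minimal Python sketch of the "observe, then hypothesize" half of the workflow. The page names, numbers, and the 20%-above-average cutoff are all invented for illustration — your analytics export and your threshold will differ:

```python
# Minimal sketch: turning analytics numbers into a test hypothesis.
# All page names and figures below are hypothetical, not real analytics data.
visits = {
    "landing-a": {"sessions": 4200, "bounces": 1890},
    "landing-b": {"sessions": 3900, "bounces": 1755},
    "landing-c": {"sessions": 4100, "bounces": 1845},
    "landing-d": {"sessions": 4000, "bounces": 2600},  # suspiciously high
}

# Observe and measure: compute the bounce rate per landing page.
rates = {page: d["bounces"] / d["sessions"] for page, d in visits.items()}

# Form a hypothesis: flag pages bouncing well above the site-wide average
# (20% above average is an arbitrary illustrative cutoff).
avg = sum(rates.values()) / len(rates)
candidates = [page for page, r in rates.items() if r > avg * 1.2]

print(candidates)  # pages worth designing an A/B test around
```

The point is only that the test candidate falls out of the numbers, not out of someone’s imagination — the A/B test itself comes after this step.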
When you structure your test around a numbers-driven reason, you are far more likely to gain some measure of conversion success.
True testing success equals testing and segmentation
In the previous sections of this article, I’ve laid out the mistaken practice of haphazardly A/B testing in a happy-go-testy age.
Everyone wants to test, so they blow out a bunch of general random tests, but they’re overlooking the critical metric-massaging magic. It’s called segmentation.
What is segmentation?
Here’s a nice definition of “segmentation” from Investopedia:
A marketing term referring to the aggregating of prospective buyers into groups (segments) that have common needs and will respond similarly to a marketing action. Market segmentation enables companies to target different categories of consumers who perceive the full value of certain products and services differently from one another.
Omniture’s “Segmentation Guide” put it like this:
Because each segment is fairly homogeneous in their needs and attitudes, they are likely to respond similarly to a given marketing strategy. That is, they are likely to have similar feelings and ideas about a marketing mix comprised of a given product or service, sold at a given price, distributed in a certain way, and promoted in a certain way.
Your website visitors are not all the same. They come from different countries, speak different languages, use different browsers, access the site using different search terms, click on different spots on your website, spend longer on different pages, use different devices, and a host of other subtle and vast differences.
So, why are you testing all visitors as if they were the same? The results you derive from generic, non-segmented testing are illusory, and they lead to skewed action.
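One concrete way to see why pooled results can be illusory is Simpson’s paradox. In this toy example (every number is invented), variant B beats variant A inside each segment, yet the pooled totals crown A the winner, because the two variants happened to receive very different traffic mixes:

```python
# Toy illustration of Simpson's paradox in A/B results (all numbers invented):
# B converts better in every segment, but pooling flips the verdict because
# A's traffic skewed toward the high-converting desktop segment.
segments = {
    # segment: {variant: (conversions, visitors)}
    "mobile":  {"A": (10, 100),   "B": (120, 1000)},
    "desktop": {"A": (500, 1000), "B": (60, 100)},
}

def rate(conversions, visitors):
    return conversions / visitors

# Per-segment view: B wins in both segments.
for name, arms in segments.items():
    print(name, f"A={rate(*arms['A']):.0%}", f"B={rate(*arms['B']):.0%}")

# Pooled view: A "wins" -- the skewed conclusion of non-segmented testing.
pool = {arm: (sum(s[arm][0] for s in segments.values()),
              sum(s[arm][1] for s in segments.values()))
        for arm in ("A", "B")}
print("pooled", f"A={rate(*pool['A']):.0%}", f"B={rate(*pool['B']):.0%}")
```

A marketer looking only at the pooled line would ship variant A and lose conversions in both segments.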
What segments exist?
Understanding segmentation is important, but it prompts the question — what types of segments exist?
This is a huge question, and outside the scope of this article. There are a variety of broad types of segmentation that you can create to divide your audience.
Chadwick Martin Bailey offers one set of segmentation approaches. Online segmentation is naturally constrained by the platform and by the limitations of analytics services, but some segmentation varieties are unique to online visitors: traffic source, device, location, and on-site behavior, among others.
There are tons of directions to go with this type of segmentation. For example, the visitors who come to your landing page from an affiliate site may have a different mindset from those who find the landing page via organic search queries.
Thus, testing all traffic as if it were behaviorally monolithic would be mistaken. What you need to do instead is segment the visitor source, then test based on the types of clearly segmented visitors indicated by the data.
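As a sketch of what “test per segment” can look like in practice, here is a plain-Python two-proportion z-test run separately on two traffic-source segments. The segment names and all counts are hypothetical; the 1.96 cutoff is the standard two-sided threshold for 95% confidence:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    return (p_b - p_a) / se                             # |z| > 1.96 ~ 95% confidence

# Hypothetical per-segment results: the same A/B test, analyzed separately
# for affiliate traffic and organic-search traffic instead of blended together.
results = {
    "affiliate": z_test(120, 2000, 165, 2000),
    "organic":   z_test(210, 3000, 215, 3000),
}
for segment, z in results.items():
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(segment, round(z, 2), verdict)
```

In this invented data, the variant moves the needle for affiliate visitors but not for organic visitors — exactly the kind of finding a blended test would average away.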
This isn’t as easy as just throwing together a simple split test and watching the numbers roll in. It requires intentionality, thoughtfulness and patience. Daniel Waisberg, an analytics advocate at Google, writes, “Choosing the right segments is not a trivial endeavor, it takes a lot of thinking.”
He’s right. Segmentation is complex and multi-faceted.
Segmentation is split testing taken to a whole new level. When you segment your testing efforts, you are adding a layer of accuracy and thoroughness that is simply not possible in a haphazard split testing world.
But the payoff is that you have a greater chance of successful testing. You possess a clear understanding of where the visitor comes from, what the visitor’s intent is and how to test that visitor’s behavior.
For an A/B test to be successful, the test must analyze the results of a specific audience segment, not the general audience as a whole. How each segment interacts with the element being tested is critical to the success of the test.
Testing for Differences is Testing for Accuracy
It’s time to admit that your web visitors comprise different groups of people. While the differences are endless, it’s important to gain an understanding of the lowest common denominator. When you do, you will be able to aggregate the audience in a way that makes segmentation sense for testing.
Your goal as an online marketer is to meet your customers’ needs. You do so by understanding those needs, and then delivering on them. But in every niche, no matter how narrow, there are a variety of differences that shape those needs.
When you admit that there are differences, and conduct split tests in a way that takes those differences into account, you will achieve a whole new level of accuracy in your testing.
Generic testing produces generic results.
The problem that I’m addressing in this article is one of generic testing. Testing becomes pointless when it is rolled out without any awareness of the differences among the users it purports to test.
As an online marketer, your responsibility is to understand your target audience with all their differences (segments).
An article at SmartInsights.com put it this way:
Differentiated campaigns with messages optimized to resonate with a particular segment are more effective than a generic message aimed at everyone but speaking strongly to no one.
You are not just selling your product to a general audience. You are selling your product to different segments. It’s time to understand those segments, and conduct split tests accordingly.
Segmented split testing produces specific results. And specific results are successful results.
On the flip side, when you understand your customer base as made up of segments, and then conduct split tests based on those segments, your findings are solid gold.
That’s the order.
That’s the recipe for success.
Split testing has not yet reached its maturity. Many corporate marketers and digital experts are still trying to understand the whole idea behind split testing, and trying to make sense of the data and their test results.
Yes, testing can do more harm than good.
That’s why I’m proposing, in general, a more strategic approach to testing. And then I’m proposing, in addition, a segmented approach to testing.
We’ve all heard the awesome success stories of the guy who changed a button color, experienced a million-fold increase in conversions, and is now living on a private island drinking limonada from a cup with a toothpick umbrella in it.
Geez, if only your split test could hand you the luxe life on a platter like that.
But we’re not going to do that unless we first wrap our minds around our customer segments, run split tests in accordance with those segments, and then take truly relevant actionable data.
(And, sorry, but private island prices these days are pretty steep.)
If you’ve been able to score some big wins without segmented testing, then you’re awesome and I respect you. However, I want to assure you that you can go farther, test better and become even awesomer when you start segmented testing.