In Standard A/B Testing, Winner Takes All Optimization Can Be a Dangerous Practice

The following is an exclusive guest-contributed post to MMW from Carl Theobald, CEO of FollowAnalytics.

With the U.S. presidential election just around the corner, there’s one safe prediction that can be made.  Some portion of the country is going to be unhappy about the results.  As with any contest where there can only be one outcome, like it or not, some participants will have to accept the will of the majority.  If you trust the polls, this election is going to be pretty close.  A lot of people are going to feel marginalized.

But imagine a system where each party got the candidate they voted for.  Republicans could be governed by the Trump administration.  Democrats would have Clinton as their president.  Third parties would even get Gary Johnson or Jill Stein. In this fictional model, each group would live in their own custom America with government that best met their needs and values.

Obviously, this can’t work in politics.  Such a system would effectively divide us into separate countries whose boundaries are drawn by political affiliation.  But not every test has to have a binary outcome.  In marketing, rather than winner takes all, each winning piece of content should go only to the segment it wins.

By now, data-driven marketing is fairly mature, especially in B2C.  We know to test everything rather than create and deliver content based on gut impulses.  But when most of us run tests, the content delivered is the outcome of a simple majority.  In other words, if I have two messages, A and B, and message A wins the test with 51% of conversions, message A is what gets delivered to the entire audience.

The flaw here is that 49% of my audience gets content that is targeted inappropriately.  49% is a tremendous number of potential customers to ignore.
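To make the flaw concrete, here is a minimal sketch of winner-takes-all evaluation (the conversion counts are made up for illustration):

```python
# Hypothetical conversions per variant, out of 1,000 sends each.
results = {"A": 510, "B": 490}

# Winner takes all: the single overall winner goes to everyone,
# even though nearly half the audience preferred the other message.
overall_winner = max(results, key=results.get)
print(overall_winner)  # "A" wins with 51%, so all users get message A
```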

Moreover, real-world scenarios are far more complex than the example above.  You should be testing several variants, not just two.  Now span all of those variants against the number of possible segments.  For example, if the results of your test look like this, then the content you serve up goes to the winner for each segment:

Segment            Winning variant
Coffee drinkers    Green button
Juice drinkers     Red button
Tea drinkers       Blue button

Here, coffee drinkers would get green buttons, juice drinkers would get red and tea drinkers would get blue.
Now, rather than alienating some of your audience, you can send everyone the content optimized to drive a conversion event.
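The per-segment selection above can be sketched in a few lines.  The conversion rates below are hypothetical, matching the coffee/juice/tea example:

```python
# Hypothetical conversion rates for each button color, by segment.
results = {
    "coffee": {"green": 0.12, "red": 0.08, "blue": 0.07},
    "juice":  {"green": 0.06, "red": 0.11, "blue": 0.05},
    "tea":    {"green": 0.05, "red": 0.07, "blue": 0.10},
}

# Instead of one global winner, pick the winning variant per segment.
winners = {seg: max(rates, key=rates.get) for seg, rates in results.items()}
print(winners)  # {'coffee': 'green', 'juice': 'red', 'tea': 'blue'}
```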

If this seems like a lot of work, you’re right.  Without a team of data scientists, marketers can hardly be expected to have the time and resources to run tests against what could be a near-infinite number of possibilities.

The solution lies in automation.  A testing platform should be able to slice and dice the audience on the back end to help the marketer define their segments, and in some cases identify groups the marketer hadn’t thought of.  In fact, it should be able to identify every possible segment in your database, and then suggest those that will produce the most statistically significant results.  It should then test content variations against samples of all of these segments, thereby predicting which variation will perform best for every group.  Now the marketer can be confident that their messaging is truly optimized for the best possible performance.
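One way such a platform might rank candidate segments by statistical significance is a two-proportion z-test on each segment’s A-vs-B results.  This is a sketch under assumed segment names and counts, not any particular vendor’s method:

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic for variant A vs. variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return abs(p_a - p_b) / se

# Hypothetical per-segment results: (conversions_A, sends_A, conversions_B, sends_B).
candidates = {
    "coffee drinkers": (120, 1000, 80, 1000),
    "weekend users":   (95, 1000, 90, 1000),
}

# Suggest segments where the variants differ most clearly
# (|z| > 1.96 corresponds to roughly 95% confidence).
ranked = sorted(candidates, key=lambda s: z_score(*candidates[s]), reverse=True)
print(ranked[0])  # the segment with the clearest A/B split
```

Here “coffee drinkers” would surface first (z ≈ 3.0), while “weekend users” shows no meaningful difference, so the platform would deprioritize it.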

If this sounds like science fiction, think again.  Machine learning and predictive intelligence are giving brand marketers the ability to do exactly that.  So while we can’t promise that every American gets the presidential candidate they vote for, we can be sure that every mobile marketer can speak the language that is most likely to convert every member of their audience, rather than just a majority.  And that’s pretty extraordinary!