Conventional wisdom from the Information Architecture and User Experience gurus tells us that buttons and calls to action should be consistent across the user experience. The thinking is that if messaging, shape, size, and color stay the same, users will locate and click buttons and calls to action more easily and more often. This is wrong. The design and messaging of buttons can be optimized at each step of the conversion funnel.
At OTTO Digital we have been using Offermatica to do a lot of multivariate testing on buttons in recent months. We’ve learned a lot and seen some tremendous lifts in CTR, conversion, and especially revenue-per-visitor (RPV). Some of our learning turns conventional wisdom on its head. We like that.
What have we learned?
Different Buttons Perform Better on Different Pages:
To me this data is groundbreaking. It proves that the “best practice” of consistency in button design for improving user experience and increasing conversion rate is not optimal. Users are skilled enough to recognize “intelligent inconsistency” throughout the purchase path.
User goals and objectives change from page to page in the flow. The pages themselves change layout and goals. It is therefore only logical that the buttons may benefit from changing as well. It’s just so cool that, as marketers, we have technology that can identify the best button from hundreds of variations in a matter of weeks.
Buttons Impact Revenue Per Visitor (RPV):
Don’t underestimate the power of these little designs. Most button elements have, at some point in our testing, shown tremendous influence on revenue, especially the color red. I know I caused a bit of a stir with some in the optimization community when I said red buttons should be a rule of landing pages. I’ll retract a bit and say the only “rule” should be that you test; occasionally we see other colors perform better. But you know what? I’ve got reams of data from the past few months backing up that red buttons rock. Keep in mind that when you test for revenue, you should filter extreme orders out of the data so they don’t skew the results.
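As a sketch of that filtering step, trimming outlier orders before computing RPV might look like the following. The percentile cutoff and the sample figures are illustrative assumptions, not values from our tests.

```python
# Sketch: trim extreme orders before computing revenue-per-visitor so a
# few huge purchases don't decide the test. The 95th-percentile cutoff
# is an illustrative assumption; pick a threshold that fits your data.

def trimmed_rpv(order_values, visitors, percentile=95):
    """Revenue-per-visitor with orders above the percentile cutoff removed."""
    if not order_values:
        return 0.0
    cutoff = sorted(order_values)[int(len(order_values) * percentile / 100) - 1]
    kept = [v for v in order_values if v <= cutoff]
    return sum(kept) / visitors

orders = [25, 30, 28, 32, 27, 29, 31, 26, 30, 950]  # one extreme order
print(round(trimmed_rpv(orders, visitors=1000), 3))  # 0.258
```

Without the trim, the single $950 order would more than quadruple the measured RPV and bury the effect of the button itself.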
Impact Changes Throughout the Funnel:
Strategically, you optimize each page to get users one step closer to their objective. As such, CTR should be the first metric you measure, but you should also track performance at every step until purchase or lead. In one recent product page test, the best-performing message was “buy now.” However, one click deeper at the shopping cart step, the message that performed best for that same product page was “add to cart.” That performance carried through to purchase, and the “add to cart” button posted an 8% lift in RPV vs. the control (existing) button. Did I mention this button was red?
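The lift arithmetic behind a number like that 8% is simple. As a quick sketch (the visitor and revenue figures below are made up for illustration, not the test’s actual data):

```python
# Sketch: RPV and percent lift vs. control. The revenue and visitor
# figures are illustrative assumptions, not data from the test above.

def rpv(revenue, visitors):
    """Revenue-per-visitor for one test variant."""
    return revenue / visitors

def lift(variant_rpv, control_rpv):
    """Percent lift of a variant over the control."""
    return (variant_rpv - control_rpv) / control_rpv * 100

control = rpv(revenue=5000.0, visitors=10_000)  # $0.50 per visitor
variant = rpv(revenue=5400.0, visitors=10_000)  # $0.54 per visitor
print(f"{lift(variant, control):.1f}% lift")    # 8.0% lift
```

The same two functions work at any step of the funnel; just swap revenue for clicks to get CTR lift instead.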
Having run many MVT tests on buttons, I can tell you that I’ve seen factors like the following be an overwhelming contributor to conversion:
• Action Design Pattern
Segmentation Yields Interesting Data:
Looking at results by segments can help inform strategies especially for targeting. We’ve seen different button behavior based on new visitor vs. return visitor, entry point and source. Target your buttons and you start to take optimization to a whole other level.
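A minimal sketch of that kind of segment breakdown, here splitting CTR by new vs. return visitors. The record fields and numbers are hypothetical; real segmentation would also cover entry point and traffic source.

```python
# Sketch: per-segment CTR from raw impression/click tallies. The field
# names and figures are illustrative assumptions, not real test data.
from collections import defaultdict

events = [
    {"segment": "new",    "impressions": 400, "clicks": 24},
    {"segment": "new",    "impressions": 600, "clicks": 30},
    {"segment": "return", "impressions": 500, "clicks": 45},
]

# Roll up impressions and clicks per segment.
totals = defaultdict(lambda: {"impressions": 0, "clicks": 0})
for e in events:
    totals[e["segment"]]["impressions"] += e["impressions"]
    totals[e["segment"]]["clicks"] += e["clicks"]

ctr = {seg: t["clicks"] / t["impressions"] for seg, t in totals.items()}
print(ctr)  # {'new': 0.054, 'return': 0.09}
```

When the segments diverge like this, serving each segment its own winning button is the targeting step the paragraph above describes.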
Button and call-to-action tests can yield some really great results. They also shouldn’t be a big strain on creative resources, so they are perfect tests for getting your feet wet with multivariate testing and optimization. Not to mention everyone loves testing buttons. It just makes sense.