Timothy McAuliffe

Okay, so all (successful) digital marketers are in agreement: Optimization Is Essential. Everywhere you look, you’ll find sound advice to relentlessly test creative, update your keyword bidding strategy, measure email open rates, multivariate test your landing pages … there is no lack of evidence that optimization will help increase your marketing ROI. But how much to optimize, that’s a different matter altogether.

What is over-optimization?

At some point during the setup of an optimization campaign, questions typically come up like, “How often do you optimize your keywords?” or “How long does it take to run an A/B split on my landing page?” The answer is always the same: let the data decide. Don’t force your results into a week if it takes two weeks to gather enough data to complete your test. Rushing results has the uncanny effect of turning into over-optimization – too many changes made too quickly to a campaign undermine the accuracy of the results, like ending a test before statistical significance has been reached. Aside from the unintended consequences of being penalized by Google or confusing Google’s algorithms with constant updates, previously isolated variables congeal into a chaotic mix of data soup. It becomes nearly impossible to tell why a Paid Search campaign had a 35% drop in volume, or why a landing page had a 12% lift on Tuesdays after 3 p.m.

Without knowing why or what changes influenced the results, the iterative process of optimization is dead in the water. For marketers, understanding causality is crucial to continually achieving KPI goals.
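One way to “let the data decide” is to estimate up front how much traffic a test needs before it can be called. Here is a minimal sketch in Python using the standard normal-approximation sample-size formula; the 5% baseline conversion rate and 1-point lift are hypothetical numbers chosen for illustration, not figures from any real campaign:

```python
import math

def sample_size_per_variant(baseline_rate, min_lift, z_alpha=1.96, z_beta=0.84):
    """Rough per-variant sample size for a two-sided A/B test.

    baseline_rate: current conversion rate (e.g. 0.05 for 5%)
    min_lift: smallest absolute change worth detecting (e.g. 0.01)
    z_alpha=1.96 -> ~95% confidence; z_beta=0.84 -> ~80% power
    """
    p = baseline_rate + min_lift / 2              # approximate pooled rate
    variance = 2 * p * (1 - p)                    # variance of the difference
    n = variance * ((z_alpha + z_beta) ** 2) / (min_lift ** 2)
    return math.ceil(n)

# Hypothetical scenario: 5% baseline, detect a 1-point absolute lift
n = sample_size_per_variant(0.05, 0.01)           # roughly 8,000+ visitors per variant
```

If the answer is eight thousand visitors per variant and your page gets four thousand a week, the test simply takes two weeks. Stopping after one is over-optimization by definition.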

How to avoid over-optimization

Proper marketing optimization starts with target KPIs, defined up front in a measurement plan. Knowing the marketing goals helps determine what to test, how much to test, and when to stop testing. It also helps set expectations on results and timing. Avoid changing any variables outside of the test that might affect the outcome; otherwise your KPIs may end up moving in the opposite direction over the long term.

For example: The goal is to increase customer lifetime value by 10% in the first quarter. The first step would be to define what variables could reasonably be tested (like CRM messaging), choose a testing methodology (A/B or multivariate), and set a statistical significance threshold (+/- 3%). Changing the price of the product mid-campaign to increase conversions in parallel would be analogous to dropping a nuclear warhead on the CRM test. Result: conversion increases 5% … customer lifetime value drops 30% … kaboom.
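Once the test has run its course, the significance check itself is straightforward. A sketch of a two-proportion z-test in Python; the conversion counts below are hypothetical, and a 1.96 cutoff corresponds to roughly 95% confidence:

```python
import math

def ab_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test: has variant B's conversion rate diverged
    from variant A's at ~95% confidence? Returns (z_score, is_significant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, abs(z) >= z_crit

# Hypothetical results: 400/8,000 conversions on A vs 480/8,000 on B
z, significant = ab_significant(400, 8000, 480, 8000)
```

The point of the formula is that both sample size and effect size matter: the same 1-point lift that is significant at 8,000 visitors per variant may be pure noise at 800.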

To avoid a devastating blow, define the goals up front, follow a comprehensive testing plan, and run each test to statistical significance. That way you won’t get caught in the trap of over-optimization.

Contact us to keep your brand far away from the trap of over-optimization.