Piloting has been used by companies for decades to assess the efficacy of different tactics and to learn about their subscription commerce business. Some recurring revenue companies have become so proficient at executing pilots that they continuously test concepts, ideas, tactics, and business strategies.
However, despite all this piloting activity, many managers still voice the same concerns: “Every time there is a pilot result discussion, it is like Groundhog Day – the same questions over and over. Am I doing something wrong? Why can’t I learn from my pilots?”
While there is no way to make pilot testing foolproof, extensive experience in the field has led Simon-Kucher to single out five core principles that are based on the most common pitfalls.
These guidelines certainly aren’t exhaustive. There are other aspects to consider when designing pilots (sample sizes, benchmarking metrics, system limitations, sometimes even corporate culture changes).
But hopefully these considerations can significantly improve the success and value of your pilots. Once mastered, pilots are a powerful tool to drive and educate strategy, pricing, marketing, and operations to help you accomplish your goals.
Some companies still rely on gut feel to make decisions and have never ventured to test hypotheses or actions. They view piloting as risky and are unsure how to implement a pilot or how to read its results and turn them into concrete strategic actions. Their alternative is simply to “design and roll out.”
Pilots can be painful to implement and require careful planning. However, piloting can help validate the assumptions behind an initiative, show you where to tweak it, and serve as a dry run for a full-scale roll-out. All things considered, the greater risk lies in not piloting.
There is such a thing as “getting too greedy” with pilots. A recent client was trying to re-frame their promotion strategy. To do so, they had many different levers they could pull: discount levels, list prices, promotion offer types, customer-specific discounts, etc. They brainstormed which combination of these levers would work best, then piloted it.
When the pilot results came back, they found it impossible to pinpoint what drove the results and how. Was it the discount level? Was it the offer type? Or a combination of both? This is not surprising – if you change too many things in an experiment, you won’t know what caused the change.
Eventually, we helped them design pilots that would test only one variable at a time. By testing wisely, they were able to estimate the impact of each individual lever and then design a smarter promotional strategy that incorporated what was learned.
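The one-variable-at-a-time idea can be made concrete with a short sketch. The lever names and values below are hypothetical, not the client’s actual configuration; the point is that each test cell changes exactly one lever from the baseline, so any difference in results can be attributed to that lever alone.

```python
# Hypothetical promotion levers for illustration only.
baseline = {"discount_pct": 10, "offer_type": "percent_off", "list_price": 49}

variations = {
    "discount_pct": [15, 20],
    "offer_type": ["free_month", "bundle"],
    "list_price": [59],
}

def build_test_cells(baseline, variations):
    """Return one pilot cell per (lever, value) pair, holding every
    other lever fixed at its baseline setting."""
    cells = []
    for lever, values in variations.items():
        for value in values:
            cell = dict(baseline)
            cell[lever] = value  # change exactly one lever
            cells.append({"changed_lever": lever, "config": cell})
    return cells

cells = build_test_cells(baseline, variations)

# Sanity check: every cell differs from the baseline in exactly one lever.
for cell in cells:
    diffs = [k for k in baseline if cell["config"][k] != baseline[k]]
    assert diffs == [cell["changed_lever"]]
```

Running all cells against the same baseline period lets you estimate each lever’s impact in isolation before combining the winners into a full strategy.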
One of the most frequently asked questions about pilots is around the infamous “control group”. Everyone knows that there needs to be one. However, what should this control group look like and how do we use it?
The most important aspect of a control group is its similarity to the pilot group. This will serve as a benchmark for this test (not necessarily for the rest of the market). The point of running a pilot is to determine the impact of an action on a sample that represents a population. Just make sure your sample represents the population that you’re testing.
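One common way to get that similarity is stratified random assignment: split customers within each segment at random, so the control group mirrors the pilot group on the attributes that matter. The sketch below uses a hypothetical customer record with a `plan` field as the stratification key; your own segments (region, tenure, spend tier) would slot in the same way.

```python
import random

def split_pilot_control(customers, stratify_key, seed=42):
    """Randomly split customers into pilot and control groups,
    balancing the split within each stratum of `stratify_key`."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    strata = {}
    for c in customers:
        strata.setdefault(c[stratify_key], []).append(c)

    pilot, control = [], []
    for members in strata.values():
        members = members[:]
        rng.shuffle(members)          # randomize within the stratum
        half = len(members) // 2
        pilot.extend(members[:half])
        control.extend(members[half:])
    return pilot, control

# Hypothetical customer base: half basic, half premium subscribers.
customers = [{"id": i, "plan": "basic" if i % 2 else "premium"} for i in range(100)]
pilot, control = split_pilot_control(customers, "plan")
```

Because the shuffle happens inside each plan tier, both groups end up with the same mix of basic and premium subscribers, which is exactly the “similarity” a control group needs.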
Think you can extract learnings and decide what to roll out (or not) after a one-week pilot? Not necessarily. Customers need time to adjust, and the first days of a pilot may reflect only a ramp-up stage rather than steady-state behavior. Typically, the customers who engage most with your company are also the most likely to react to a change – positively or negatively.
You need to keep a cool head and let the dust settle. Depending on purchase cycles and subscriber volume, pilots could run anywhere from a couple of weeks (for high-volume cycles) to months (for low-volume cycles or hard-to-execute actions). Waiting it out will show you which customers react differently to which actions, and it will dictate how long you should expect things to take to settle when you roll out the full program.
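One simple way to tell ramp-up from steady state is to wait until the metric you track stops moving: the sketch below (with made-up weekly conversion numbers, purely illustrative) returns the first week after which the last few readings all sit within a small band of their mean.

```python
def weeks_to_steady_state(weekly_values, window=3, tolerance=0.05):
    """Return the first week index at which the last `window` readings
    all lie within +/- tolerance of their mean, or None if the metric
    never stabilizes within the observed period."""
    for end in range(window, len(weekly_values) + 1):
        recent = weekly_values[end - window:end]
        mean = sum(recent) / window
        if all(abs(v - mean) <= tolerance * mean for v in recent):
            return end
    return None

# Illustrative data: an early spike as the most engaged customers react
# first, then a settling toward steady-state behavior.
conversion = [0.20, 0.14, 0.11, 0.10, 0.101, 0.099, 0.10]
```

A one-week read on this series would report 0.20 – double the true steady-state level – which is exactly the trap of calling a pilot too early.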
Pilots are often designed to perfection, yet horribly implemented. For example, in the quest to understand why a pilot was not performing well in the clearance section of a retailer, I personally went to the store to see what was going on. I was (unpleasantly) surprised when the sales associate told me that I needed to hand her the clearance item so that she could scan it and then tell me how much it was.
Putting this extra hurdle on the customer was not faulty design, but rather a lack of communication. In hurrying to launch the pilot quickly, this company didn’t communicate the implementation clearly to its sales associates. Keep in mind that, in many cases, sales and customer service personnel will be the ones who communicate your action to your customers. If you do not give them clear guidelines and scripts for how to explain the action, they can get very creative (and not always in a good way).