Part of SeriesC’s Statistically Speaking series
Over the past 40 years, the Harvard Business Review (HBR) has studied how companies conduct business experimentation, and it has repeatedly found that companies fail to learn from their tests because they never adopt the true discipline of experimentation.
Using J.C. Penney’s costly and disastrous 2012 overhaul as a key example, HBR pointed out that – had CEO Ron Johnson established a proper set of experiments to test his ideas to do away with coupons, double down on upscale brands, and use technology to eliminate cash registers – he might have discovered how customers would revolt and push store sales down by 44% that year.
Too often these days we hear business leaders in CEO and CMO roles declare that they need to “test their hypothesis” or “run an experiment” in hopes of discovering whether a new business model or product will succeed. The trouble is, they don’t actually form solid hypotheses or conduct experiments correctly. The right way to experiment involves five scientifically sound steps: form a specific hypothesis; identify the precise independent and dependent variables; conduct controlled tests in which you manipulate the independent variable; carefully observe and analyze the effects; and translate that analysis into actionable insights. Follow the steps and they’ll present you with a valuable answer. So, where do many seemingly smart companies go wrong when it comes to business experimentation?
HBR posits that businesses can fall down at various stages when running a business experiment. Here, we’ve taken HBR’s Checklist for Running a Business Experiment and included what we’re calling Experiment Traps that you should recognize and avoid throughout the process:
- Purpose – HBR asks: Does the experiment have a clear purpose?
  - The Hypothesis Hypocrisy Trap – did you and your management team agree that a test was the best path forward, and why? Is your hypothesis specific and straightforward? (A good hypothesis clearly states what you think will happen based on your "educated guess" – what you already know and what you have already learned from your research.) If not, you’ve already fallen into the biggest experiment trap: Hypothesis Hypocrisy.
- Buy-in – HBR asks: Have stakeholders made a commitment to abide by the results?
  - The Cherry-Picking Trap – are you entering into this experiment equally prepared to be delighted or disappointed by the results? Will you resist the temptation to cherry-pick results that support your preconceived ideas? Avoid this trap by agreeing up front on how your company will proceed once the results come in. If you see the experiment as part of a larger learning agenda that supports the company’s overall strategy, you’re off on the right foot.
- Feasibility – HBR asks: Is the experiment doable?
  - The Unsound Trap – HBR says “experiments must have testable predictions,” but complex business variables and interactions, or ‘causal density,’ can “make it extremely difficult to determine cause-and-effect relationships.” Avoid this trap by knowing your numbers. Start by figuring out whether you have a sample size large enough to average out the variables you’re not interested in. Without the right sample size, your experiment won’t be statistically valid. Engage SeriesC’s analytics team to help you determine the right sample size for your experiment.
- Reliability – HBR asks: How can we ensure reliable results?
  - The Corner Cutting Trap – when conducting your experiment, you’ll face time and cost pressures and other real-world factors that can affect the reliability of your test. Resist the pull to cut corners by adopting proven methods from the medical field – randomization, control groups, and blind testing – which save time in the design of your experiment and produce more reliable results. Or tap into big data to augment your experiment so you can better filter out statistical noise and minimize uncertainty.
- Value – HBR asks: Have we gotten the most value out of the experiment?
  - The Wrong Impression Trap – don’t go to the trouble of conducting an experiment and then study only the correlations – the relationships between one variable and another. Dig into causality as well: correlation alone can’t tell you whether one variable actually drives the other, or whether a hidden factor drives both. Make sure to spend just as much time analyzing the data from your experiment as you did setting it up and executing it.
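To make the sample-size point concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation involved. It uses the standard two-proportion z-test approximation; the 10% baseline conversion rate and hoped-for 12% rate are hypothetical numbers chosen purely for illustration, not figures from HBR or SeriesC.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group needed to detect a difference
    between two conversion rates with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. ~1.96
    z_beta = NormalDist().inv_cdf(power)           # power quantile, e.g. ~0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # combined binomial variance
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical example: baseline 10% conversion, hoping to detect a lift to 12%.
# The answer is in the thousands per group - far more than many teams expect.
print(sample_size_per_group(0.10, 0.12))
```

Notice how quickly the required sample shrinks as the effect you’re hunting for grows: detecting a jump from 10% to 15% needs only a few hundred customers per group, which is one reason small pilots so often chase big, unrealistic effects.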
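The Wrong Impression Trap is easiest to see in a toy simulation. The sketch below uses an invented scenario – a loyalty program whose true causal effect on repeat purchases is small, but where loyal customers both opt in more often and buy more anyway – to show how observational data inflates the apparent effect, while coin-flip randomization recovers something close to the truth. All probabilities here are made up for illustration.

```python
import random

random.seed(0)

def purchase_prob(in_program, loyal):
    """Loyalty (the confounder) drives purchases far more than the program does."""
    base = 0.5 if loyal else 0.2        # loyal customers buy a lot more anyway
    lift = 0.05 if in_program else 0.0  # the program's true causal effect is small
    return base + lift

def simulate(randomized, n=100_000):
    """Return the observed purchase-rate gap between program members and non-members."""
    treated, control = [], []
    for _ in range(n):
        loyal = random.random() < 0.5
        if randomized:
            in_program = random.random() < 0.5  # coin-flip assignment
        else:
            # self-selection: loyal customers opt in far more often
            in_program = random.random() < (0.8 if loyal else 0.2)
        bought = random.random() < purchase_prob(in_program, loyal)
        (treated if in_program else control).append(bought)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"Observational gap: {simulate(randomized=False):.3f}")  # inflated by confounding
print(f"Randomized gap:    {simulate(randomized=True):.3f}")   # close to the true 0.05
```

Without randomization, the correlation suggests the program adds roughly twenty percentage points of purchases; the coin flip reveals the real lift is about five. That gap is exactly the wrong impression a cherry-picked or uncontrolled test leaves behind.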
The bottom line: why go with gut, intuition, and past experiences that aren’t apples-to-apples when you could be informed by relevant, tested knowledge? Steer clear of these experiment traps and you’ll avoid inefficiency, unnecessary costs, and useless results. Embrace the proper process and you’ll learn something valuable, increasing your chances of success. Statistically speaking.