Big Data

Experiment Traps: 5 signs that your business experiment isn’t actually an experiment at all

Part of SeriesC’s Statistically Speaking series. Over the past 40 years, the Harvard Business Review (HBR) has studied how companies conduct business experiments, and it has often found that companies fail to learn from their tests because they never adopt the true discipline of experimentation.

Using J.C. Penney’s costly and disastrous 2012 overhaul as a key example, HBR pointed out that, had CEO Ron Johnson established a proper set of experiments to test his ideas – doing away with coupons, doubling down on upscale brands, and using technology to eliminate cash registers – he might have discovered that customers would revolt and push store sales down by 44% that year.

Too often these days we hear business leaders in CEO and CMO roles declare that they need to “test their hypothesis” or “run an experiment” in hopes of discovering whether a new business model or product will succeed. The trouble is, they don’t actually form solid hypotheses or conduct experiments correctly. The right way to experiment involves five scientifically sound steps: form a specific hypothesis, identify the precise independent and dependent variables, conduct controlled tests in which you manipulate the independent variable, carefully observe the effects, and analyze the results to reach actionable insights. Follow these steps and you’ll come away with a valuable answer, whether or not it’s the one you hoped for. So, where do many seemingly smart companies go wrong when it comes to business experimentation?

HBR posits that businesses can fall down at various stages when running a business experiment. Here, we’ve taken HBR’s Checklist for Running a Business Experiment and included what we’re calling Experiment Traps that you should recognize and avoid throughout the process:

  1. Purpose – HBR asks: Does the experiment have a clear purpose?
    1. The Hypothesis Hypocrisy Trap – did you and your management team agree that a test was the best path forward, and why? Is your hypothesis specific and straightforward? A good hypothesis clearly states what you think will happen based on your "educated guess" – what you already know and what you have already learned from your research. If not, you’ve already fallen into the biggest experiment trap: Hypothesis Hypocrisy.
  2. Buy-in – HBR asks: Have stakeholders made a commitment to abide by the results?
    1. The Cherry-Picking Trap – are you entering into this experiment equally prepared to be delighted or disappointed by the results? Will you resist the temptation to cherry-pick results that support your preconceived ideas? Avoid this trap by sitting down and agreeing on how your company will proceed once the results come in. If you see the experiment as part of a larger learning agenda that supports the company’s overall strategy, then you’re off on the right foot.
  3. Feasibility – HBR asks: Is the experiment doable?
    1. The Unsound Trap – HBR says “experiments must have testable predictions,” but complex business variables and interactions – or ‘causal density’ – can “make it extremely difficult to determine cause-and-effect relationships.” Avoid this trap by knowing your numbers. Start by determining whether you have a sample size large enough to average out all the variables you’re not interested in. Without the right sample size, your experiment won’t be statistically valid. Engage SeriesC’s analytics team to help you determine the right sample size for your experiment.
  4. Reliability – HBR asks: How can we ensure reliable results?
    1. The Corner Cutting Trap – when conducting your experiment, you’ll face constraints of time and cost and other real-world factors that can affect the reliability of your test. Resist the pull to cut corners by adopting proven methods from the medical field – randomization, control groups, and blind testing – which save time in the design of your experiment and produce more reliable results. Or tap into big data to augment your experiment so you can better filter out statistical noise and minimize uncertainty.
  5. Value – HBR asks: Have we gotten the most value out of the experiment?
    1. The Wrong Impression Trap – don’t go to the trouble of conducting an experiment without studying not only the correlations – the relationship between one variable and another – but also the causality. Causality helps us understand connections between causes and effects that usually aren’t immediately obvious. Make sure to spend just as much time analyzing the data from your experiment as you did setting it up and executing it.
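The sample-size check described in the Unsound Trap can be sketched with the standard two-proportion power formula. The snippet below is a hypothetical illustration in Python; the function name and the baseline/treatment conversion rates are our own made-up example, not figures from HBR or SeriesC:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_baseline, p_treatment, alpha=0.05, power=0.8):
    """Approximate subjects needed per group to detect a difference
    between two proportions (two-sided test, equal group sizes)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a 5% significance level
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_avg = (p_baseline + p_treatment) / 2
    effect = abs(p_treatment - p_baseline)
    n = ((z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
          + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                               + p_treatment * (1 - p_treatment))) ** 2
         / effect ** 2)
    return math.ceil(n)

# Detecting a lift from a 5% to a 6% conversion rate takes thousands of
# customers per group; a bigger expected lift needs far fewer.
print(sample_size_per_group(0.05, 0.06))
```

The exercise makes HBR’s point concrete: small expected effects demand large samples, and running a test without enough subjects leaves the result statistically meaningless.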
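Randomization, the first of the medical-field methods named in the Corner Cutting Trap, is straightforward to implement. Here is a minimal sketch in Python; the function name and the customer-ID input are hypothetical:

```python
import random

def assign_groups(customer_ids, seed=42):
    """Randomly split customers into equal-sized control and treatment
    groups, so pre-existing differences average out across both."""
    rng = random.Random(seed)   # a fixed seed makes the assignment reproducible
    shuffled = list(customer_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "treatment": shuffled[half:]}

groups = assign_groups(range(1000))
# Only the treatment group sees the change; the control group is the baseline.
```

Because assignment is random rather than self-selected, any difference you later measure between the two groups can be attributed to the change you made, not to who happened to be in each group.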
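The correlation-versus-causation distinction behind the Wrong Impression Trap is easy to demonstrate with simulated data. In this made-up Python example, a hidden driver (say, seasonality) moves two metrics together, so they correlate strongly even though neither causes the other:

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

rng = random.Random(0)
season = [rng.gauss(0, 1) for _ in range(1000)]              # the hidden driver
coupon_redemptions = [s + rng.gauss(0, 0.3) for s in season]
store_visits = [s + rng.gauss(0, 0.3) for s in season]

# Strong correlation, yet changing one metric would not move the other:
print(round(pearson(coupon_redemptions, store_visits), 2))
```

Only a controlled experiment, in which you manipulate one variable while holding the others fixed, can tell these two situations apart.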

The bottom line: why rely on gut, intuition, and past experiences that aren’t apples-to-apples comparisons when you could be informed by relevant, tested knowledge? Steer clear of these experiment traps and you’ll avoid inefficiency, unnecessary costs, and useless results. Embrace the proper process and you’ll learn something valuable, increasing your chances of success. Statistically speaking.

Avoid these experiment traps

For Big Data Innovators: Some Great New Small Data

The team at Big Data Republic and its partner SAP have released the results of a survey of 200+ of their engaged enterprise community "to see how prepared organizations are to make use of big data in their operations." The seven-page report has some strong detail, particularly about the healthcare, financial, and government sectors, pointing to the limitations most enterprises still face as they get their heads, arms, and budgets wrapped around the possibilities of big data. This is the kind of report we love for entrepreneurs and innovators who are seeking to understand their market. If the big-data advance you're bringing to the world promises "faster return of practical insight" as its value proposition, for example, you'd surely love to know that only 1.9% of the respondents define success that way. The vast majority still see "faster" as a distant-horizon goal, grasping more immediately for the basics: cost savings, efficiencies, and simply seeing what practical insights are there to be found. (See page 2.)

Or, if your company is questioning how much effort to put into packaging professional services with your big-data product, it would help to know that only 9% said "we don't have the talent to make use of our data" is the biggest impediment holding them back – and that number drops to 6% among respondents who work in enterprises with 1,000 employees or more. (See page 4.)

For me, the most helpful statistics come on pages 6 and 7, where the study covers how the C-suite is involved in big-data project decision making, how various market verticals believe they're doing, and how senior management perception of what's holding enterprises back differs from non-management perceptions.

What can you do when this kind of convenient study doesn't exist to give you detailed market insight? Start asking questions. There's no substitute for talking to would-be buyers and users, from the C-level down to the lowest-level end user. You don't need a formal survey of 200 respondents to start seeing patterns in the data. You don't need to be selling to have a conversation.