
Experiment Traps: 5 signs that your business experiment isn’t actually an experiment at all

Part of SeriesC’s Statistically Speaking series

Over the past 40 years, the Harvard Business Review (HBR) has studied how companies conduct business experiments, and it has repeatedly found that companies fail to learn from their tests because they never adopt the true discipline of experimentation.

Using J.C. Penney’s costly and disastrous 2012 overhaul as a key example, HBR pointed out that, had CEO Ron Johnson established a proper set of experiments to test his ideas – doing away with coupons, doubling down on upscale brands, and using technology to eliminate cash registers – he might have discovered how customers would revolt and push store sales down by 44% that year.

Too often these days we hear business leaders in CEO and CMO roles declare that they need to “test their hypothesis” or “run an experiment” in hopes of discovering whether a new business model or product will succeed. The trouble is, they don’t actually form solid hypotheses or conduct experiments correctly. The right way to experiment involves five scientifically sound steps: form a specific hypothesis; identify the precise independent and dependent variables; conduct controlled tests in which you can manipulate the independent variable; carefully observe and analyze the effects; and distill those findings into actionable insights. Follow the steps and they’ll present you with a valuable answer. So, where do many seemingly smart companies go wrong when it comes to business experimentation?

HBR posits that businesses can fall down at various stages when running a business experiment. Here, we’ve taken HBR’s Checklist for Running a Business Experiment and included what we’re calling Experiment Traps that you should recognize and avoid throughout the process:

  1. Purpose – HBR asks: Does the experiment have a clear purpose?
    1. The Hypothesis Hypocrisy Trap – did you and your management team agree that a test was the best path forward? Why? Is your hypothesis specific and straightforward? (A good hypothesis clearly identifies what you think will happen based on your “educated guess” – what you already know and what you have already learned from your research.) If not, you’ve already fallen into the biggest experiment trap: Hypothesis Hypocrisy.
  2. Buy-in – HBR asks: Have stakeholders made a commitment to abide by the results?
    1. The Cherry-Picking Trap – are you entering into this experiment equally prepared to be delighted or disappointed by the results? Will you avoid the temptation to cherry-pick results that support your preformed ideas? Avoid this trap by sitting down and agreeing on how your company will proceed once the results come in. If you see the experiment as part of a larger learning agenda that supports the company’s overall strategy, then you’re off on the right foot.
  3. Feasibility – HBR asks: Is the experiment doable?
    1. The Unsound Trap – HBR says “experiments must have testable predictions” but complex business variables and interactions or ‘causal density’ can “make it extremely difficult to determine cause-and-effect relationships.” Avoid this trap by knowing your numbers. Start by figuring out if you have a sample size large enough to average out all the variables you’re not interested in. Without the right sample size, your experiment won’t be statistically valid. Engage SeriesC’s analytics team to help you determine the right sample size for your experiment.
  4. Reliability – HBR asks: How can we ensure reliable results?
    1. The Corner Cutting Trap – when conducting your experiment, you’ll face challenges of time, cost, and other real-world factors that can affect the reliability of your test. Resist the pull to cut corners by adopting proven methods from the medical field – randomization, control groups, and blind testing – which save you time in designing your experiment and produce more reliable results. Or tap into big data to augment your experiment so you can better filter out statistical noise and minimize uncertainty.
  5. Value – HBR asks: Have we gotten the most value out of the experiment?
    1. The Wrong Impression Trap – don’t go to the trouble of conducting an experiment without studying not only the correlations – the relationship between one variable and another – but also the causality. Causality helps us understand connections between causes and effects that aren’t immediately obvious. Make sure to spend just as much time analyzing the data from your experiment as you did setting it up and executing it.
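To make the sample-size point in the Feasibility trap concrete, here is a minimal sketch of the standard two-proportion sample-size calculation. This is an illustration, not part of HBR’s checklist: the baseline conversion rate, expected lift, significance level, and power below are assumed numbers for the example, and a real experiment design should be checked with an analytics team.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects needed in EACH group (control and treatment)
    to detect a shift in conversion rate from p1 to p2 with a two-sided
    z-test at significance level `alpha` and the given statistical power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Assumed scenario: detecting a lift from a 5% to a 6% conversion rate
# needs on the order of 8,000 customers per group; a test run on a few
# hundred customers simply cannot resolve an effect that small.
print(sample_size_per_group(0.05, 0.06))
```

Note how quickly the required sample grows as the expected effect shrinks: halving the lift roughly quadruples the number of customers needed, which is often what makes an otherwise sensible experiment infeasible.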

The bottom line: why go with gut, intuition, and past experiences that aren’t apples-to-apples comparisons when you could be informed by relevant, tested knowledge? Steer clear of these experiment traps in your process and you’ll avoid inefficiency, unnecessary costs, and useless results. Embrace the proper process and you’ll learn something valuable, increasing your chances of success. Statistically speaking.


A Lesson From SXSW: Are You Getting the Best Return on Your Marketing Investment?

Over the several days in March that make up the inundation of marketing materials, sales pitches, parties, and events that seem to embody South by Southwest today, everyone is trying to squeeze out the most visibility, the most contacts, and the highest return for their marketing dollar. Some efforts are excellent, some not so much. One factor that consistently separates the memorable from the forgettable is the degree to which those marketing efforts are tied into a larger theme or campaign, and the number of reinforcing touch points a prospective customer has with the material.

Let me provide an example of one of the more forgettable efforts: A company sponsored a shuttle bus to ferry the hordes of tech denizens around the several square blocks encompassing the convention center and its surroundings during the entirety of SXSW. The bus clearly displayed the company's branding and a colorful selection of attractive images. However, no advertisement about this bus appeared on the company's website, nor even a URL on the bus itself. Company representatives were not on hand to talk about the company or its products, and no means of gathering information from potential customers was provided. There wasn't even any indication of a corresponding event, booth, talk, or location where one could learn more or build on the experience. The most I got from the bus was a free ride, and the only reason I investigated their website was to write this post.

On the other hand, another company had opted to build a large sign/art installation in the middle of a public area to attract attention. This installation had a constantly revolving, but never overwhelming, set of company representatives on hand to explain the installation, the company, its mission, product, and goals (as well as to collect email addresses of interested potential customers). Cards were available for visitors to take away, and each of these elements was tagged to direct people to the company's larger booth in the convention center, ensuring an opportunity for follow-up. Finally, the whole exhibit found prominent placement on the company's website for the duration of the convention.

In both cases, the major cost of the marketing effort was the public signage itself, but the incremental cost of attentive messaging, a bit of dedicated manpower, and some cheap printing transformed what is essentially a basic billboard into an active campaign that can deliver real, quantifiable results.

Marketing initiatives, especially at an event so deeply saturated with corporate messaging, cannot exist as standalone entities. Only by combining multiple channels of interaction and involvement – by aligning the full force of your public presence behind those campaigns – can their true value be unlocked.