Wednesday, November 10, 2010


Trusting results of product concept research

Any quantitative study that tests purchase interest in a new or unfamiliar product concept will produce results that are likely to be questioned or doubted, even by the people who designed the study. At my current employer, the Market Research team has made great strides toward normalizing the variance in estimated take-rates from product to product and study to study. We've done this by:

  • Creating standardized wordings for questions capturing interest and purchase intent;
  • Creating standardized scales for questions concerning agreement, likelihood, satisfaction, and switching; and,
  • Retaining archival data from previous studies to make benchmark comparisons (a rough sketch of such a comparison follows this list).
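
For that last bullet, the benchmark comparison itself is simple arithmetic. Here is a rough sketch of the idea in Python, using a small made-up archive of top-two-box take-rates; every name and number below is illustrative, not an actual benchmark of ours:

    # Illustrative only: comparing a new concept's take-rate against archived benchmarks.
    from statistics import mean, stdev

    # Archived top-two-box purchase-intent take-rates from prior concept tests (made up)
    benchmark_take_rates = [0.18, 0.22, 0.15, 0.25, 0.20, 0.17, 0.23]

    def benchmark_z_score(new_take_rate, benchmarks):
        """How many standard deviations the new result sits above or below the archive."""
        return (new_take_rate - mean(benchmarks)) / stdev(benchmarks)

    new_concept_take_rate = 0.31  # e.g., top-two-box share from the latest study
    z = benchmark_z_score(new_concept_take_rate, benchmark_take_rates)
    print(f"New concept sits {z:+.1f} SD from the benchmark archive")
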
Despite this, the fickle nature of telephone and web-based survey audiences remains a hobgoblin of professional consumer researchers. Twenty years ago, response rates on telephone surveys would easily surpass 35% or 40%; now they are fortunate to break 20% (not to mention that 18% or more of the population no longer has a land-line telephone). Ten years ago, response rates on web-based surveys were commonly in the 5% to 8% range; now it is not unusual to obtain less than a 2% response rate (not to mention concerns that many panels are stacked with "professional respondents"). Frankly, despite all of our efforts to collect data consistently and to hold consistent expectations of how that data guides insights, the changing world makes it more and more difficult to obtain "reliable" measures that uniformly track the general population.

But we learn certain compensatory tricks and caveats. For example, we know that consumers typically under-report common daily activities (e.g., time spent watching TV often gets reported at around 22 to 24 hours per week, but when Nielsen's actual "people meter" is switched on, it's typically closer to 31 or 32 hours per week). Conversely, consumers will over-report infrequent activities (e.g., consumers over-report online long-form video viewing; ethnographic studies that observe actual waking-to-bedtime behavior suggest this activity is over-reported by a factor of 5x to 8x).
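
In practice, these corrections amount to little more than scaling a self-reported figure by a ratio learned from observation. A minimal sketch of that kind of adjustment, with purely illustrative factors rather than our actual calibration values:

    # Illustrative only: "compensatory" scaling of self-reported behavior.
    # Ratios of observed behavior to self-reported behavior, from hypothetical validation work.
    CALIBRATION_FACTORS = {
        "tv_hours_per_week": 31.5 / 23.0,        # under-reported: observed ~31-32 vs. reported ~22-24
        "online_longform_video_hours": 1 / 6.5,  # over-reported: roughly 5x to 8x, midpoint ~6.5x
    }

    def calibrate(metric, self_reported_value):
        """Scale a self-reported figure toward what observation suggests is real."""
        return self_reported_value * CALIBRATION_FACTORS[metric]

    print(round(calibrate("tv_hours_per_week", 23), 1))            # ~31.5 hours/week
    print(round(calibrate("online_longform_video_hours", 10), 1))  # ~1.5 hours/week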

A recent study regarding a new sports-related product/service has returned a mountain of data, based on a questionnaire that was carefully designed and vetted by some of the best personnel at both our company and the vendor.

We knew going into this research that presenting such a multifaceted product would likely require a video format to convey all of the features to the respondent. On the other hand, most of our new product concept testing does not enjoy the benefit of a glossy video presentation, so some of the "benchmark" data loses its comparability. People tend to embrace a concept more warmly when it's presented to them in a stimulating, engaging way (such as a video clip) than when they're presented with only words on a page.

So, when the results came back showing rather strong interest in the concept, it didn't take long for us to begin wondering whether it was the slick presentation of the concept (compared to other, more typical presentation formats) that gave it an edge against benchmarks.

In this particular case, we had also asked some "true or false" questions to gauge whether the respondents truly understood what the product offered, or whether they had gotten carried away with imagined promises of delivered benefits. We concluded that at least two-thirds of the respondents really had a good grasp of the concept (getting three out of three of the true/false questions correct), which helped ease everyone's concerns about potentially inflated take-rates.
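
The gating logic is straightforward; here is a minimal sketch of how such a split might be computed, using made-up respondent records rather than the actual study data:

    # Illustrative only: gating purchase intent on concept comprehension.
    respondents = [
        {"tf_correct": 3, "would_buy": True},
        {"tf_correct": 3, "would_buy": False},
        {"tf_correct": 2, "would_buy": True},
        {"tf_correct": 3, "would_buy": True},
        {"tf_correct": 1, "would_buy": True},
    ]

    def take_rate(group):
        """Share of a respondent group expressing purchase intent."""
        return sum(r["would_buy"] for r in group) / len(group) if group else 0.0

    comprehenders = [r for r in respondents if r["tf_correct"] == 3]

    print(f"Comprehension rate: {len(comprehenders) / len(respondents):.0%}")
    print(f"Overall take-rate: {take_rate(respondents):.0%}")
    print(f"Take-rate among full comprehenders: {take_rate(comprehenders):.0%}")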

What do you do in your organization when you encounter research situations such as this?
