Wednesday, November 10, 2010


Trusting results of product concept research

Any quantitative study that tests purchase interest in a new or unfamiliar product concept will produce results that are likely to be questioned or doubted, even by those who designed the study. At my current employer, the Market Research team has made great strides, at considerable effort, toward normalizing the variance that occurs in estimated take-rates from product to product and study to study. We've done this by:

  • Creating standardized wordings for questions capturing interest and purchase intent;
  • Creating standardized scales for questions concerning agreement, likelihood, satisfaction, and switching; and,
  • Retaining archival results from previous studies to make benchmark comparisons (one way such a comparison might run is sketched just below).
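
To make that last bullet concrete, here is a minimal sketch of checking a new concept's take-rate against archived studies. Everything in it is hypothetical: the study names, rates, and sample sizes are invented, and the two-proportion z-test merely stands in for whatever comparison your own shop prefers.

    # Minimal sketch: compare a new concept's top-two-box take-rate against
    # archived benchmark studies. All names and numbers are hypothetical.
    import math

    def two_prop_z(p1, n1, p2, n2):
        """Two-proportion z-statistic: does the new take-rate differ from a benchmark?"""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical archive: (study name, take-rate, sample size)
    archive = [("concept_a", 0.18, 400), ("concept_b", 0.22, 350), ("concept_c", 0.15, 500)]

    new_rate, new_n = 0.31, 450  # hypothetical result for the new concept

    for name, rate, n in archive:
        z = two_prop_z(new_rate, new_n, rate, n)
        flag = "significant at ~95%" if abs(z) > 1.96 else "not significant"
        print(f"vs {name}: z = {z:+.2f} ({flag})")
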
Despite this, the fickle nature of telephone and web-based survey audiences remains a hobgoblin of professional consumer researchers. Twenty years ago, response rates on telephone surveys would easily surpass 35% or 40%, whereas now they are fortunate to break 20% (not to mention that 18% or more of the population no longer has a land-line telephone). Ten years ago, response rates on web-based surveys were commonly in the 5% to 8% range, whereas now it is not unusual to obtain less than a 2% response rate (not to mention concerns that many panels are stacked with "professional respondents"). Frankly, despite all of our efforts to collect consistent data and to set consistent expectations for how that data guides insights, the changing world makes it more and more difficult to obtain "reliable" measures that uniformly track the general population.

But we learn certain compensatory tricks and caveats. For example, we know that consumers typically under-report common daily activities (e.g., time spent watching TV is often reported at around 22 to 24 hours per week, but when Nielsen's actual "people meter" is switched on, it's typically closer to 31 or 32 hours per week). Conversely, consumers will over-report infrequent activities (e.g., we're seeing consumers over-report online long-form video viewing, but ethnographic studies that observe actual waking-to-bedtime behavior suggest that this activity is potentially over-reported by a factor of 5x to 8x).
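
As a toy illustration (and emphatically not our actual weighting model), such factors could be folded into a simple adjustment step. The correction values below are back-of-the-envelope midpoints derived from the figures above:

    # Rough illustration of adjusting self-reported hours with assumed
    # correction factors; the factors are illustrative midpoints only.
    CORRECTIONS = {
        "tv_weekly_hours": 31.5 / 23.0,  # metered ~31-32 hrs vs. reported ~22-24 hrs
        "online_video_hours": 1 / 6.5,   # over-reported by roughly 5x to 8x
    }

    def adjust(activity, reported_hours):
        """Scale a self-reported figure by its (assumed) correction factor."""
        return reported_hours * CORRECTIONS.get(activity, 1.0)

    print(adjust("tv_weekly_hours", 23))    # ~31.5 adjusted weekly hours
    print(adjust("online_video_hours", 4))  # ~0.6 adjusted weekly hours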

A recent study regarding a new sports-related product/service has returned a mountain of data, based on a questionnaire that was carefully designed and vetted by some of the best personnel from both our company and the vendor.

We knew going into this research that the presentation of such a multifaceted product would likely require a video format to convey all of the features to the respondent. On the other hand, most of our new product concept testing does not enjoy the benefits of a glossy video presentation, so some of the "benchmark" data loses its comparability. People tend to more warmly embrace a concept that's been presented to them in a stimulating, engaging way (such as a video clip) than when they're presented with only words on a page.

So, when the results came back showing rather strong interest in the concept, it didn't take long for us to begin wondering whether it was the slick presentation of the concept (compared to other, more typical presentation formats) that gave it an edge against benchmarks.

In this particular case, we had also asked some "true or false" questions to gauge whether the respondents truly understood what the product offered, or whether they had gotten carried away with imagined promises of delivered benefits. At least two-thirds of the respondents had a good grasp of the concept (answering three out of three true/false questions correctly), which helped ease everyone's concerns about potentially inflated take-rates.
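
For what it's worth, that comprehension gate is simple to tabulate. Here is a minimal sketch with made-up data, splitting top-two-box purchase intent by whether a respondent answered all three true/false items correctly:

    # Minimal sketch of the comprehension check described above.
    # Data are hypothetical: (correct true/false answers out of 3,
    # top-two-box purchase intent?)
    respondents = [
        (3, True), (3, False), (2, True), (3, True), (1, True), (3, False),
    ]

    full = [pi for score, pi in respondents if score == 3]
    partial = [pi for score, pi in respondents if score < 3]

    print(f"full comprehension: {len(full) / len(respondents):.0%} of sample")
    print(f"take-rate | full comprehension:    {sum(full) / len(full):.0%}")
    print(f"take-rate | partial comprehension: {sum(partial) / len(partial):.0%}")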

What do you do in your organization when you encounter research situations such as this?


6 Comments:

At 11:15 AM, November 13, 2010, Anonymous Anonymous said...

VERY informative -- thank you for your great insight.

 
At 3:15 PM, December 06, 2010, Anonymous Brian said...

You expertly point out issues that we confront daily. Participant engagement and the duration and validity of findings are difficult challenges to address.

We tackle participant engagement through both a video-based survey tool and a web-based participant community. The survey tool uses video to test the piece of content itself, an object within the video, and/or a situational experience. The community space keeps participants engaged (we needed just one re-recruit during a recent 13-week study) because we are able to mimic the real world while allowing user-generated uploads and peer-to-peer interaction.

The duration of a study's usefulness, the second piece, goes to the heart of what we believe. We structure our studies to identify and articulate underlying motivations and needs. Technology is changing so rapidly, particularly with the introduction of the iPad, that a traditional study done a few months prior can be irrelevant. However, if you uncover the fundamental motivations and needs, a company can adapt more quickly despite shifts in the surrounding environs.

Sorry for the long comment!

Brian

 
At 10:39 PM, December 13, 2010, Blogger Sean Copeland said...

I work for a company where these issues have been resolved 99% of the time. Measuring purchase intent (PI) is only useful when you have a comparative concept with actual market data to back it up. Low response rates can be compensated for by making questions easier for people to answer, whether through the method of delivery or the method of execution.

I'll be blogging about these types of things every week on my blog "The New Market Research". Check it out when you have a moment: http://www.seancopeland.me/

Thanks,

Sean Copeland

 
At 12:53 AM, January 25, 2011, Anonymous Sam said...

It is now old-fashioned to attract customers through telephone calls; the conversion rate of such calls is less than 2 percent. Great post comparing the old telephone rates to the current ones.

 
At 3:40 AM, March 30, 2011, Blogger Darshan said...

It’s easy to look at the results of market research through biased eyes. Choose the right analytical tools, and look for statistically significant findings from quantitative research and common key insights from qualitative work.

 
At 5:36 AM, February 28, 2014, Blogger Unknown said...

"The fickle nature of telephone and web-based survey audiences remains a hobgoblin of professional consumer researchers" This concept is suitable for technology in general, since the strategy of each company based on demand and customer needs. In swot analysis report identified key data that show how it affects the operations and profitability of the company.

 
