Thursday, October 27, 2005


How large should my sample be?

Before we launch a marketing research study or public opinion survey, the client often asks, "How big should the sample be?" or "How many people do we need to interview for the results to be valid?" Public relations firms are especially prone to these questions: they appreciate the value of research for ink (media coverage), but seldom excel in sampling design. We don't have a "standard" answer, because there is no universal standard for sample sizes and error tolerance. It is quite literally a subjective preference, based on custom, budget, and the consequences of the findings.

Statisticians, media editors, and business stakeholders alike can debate what constitutes an "appropriate sample size"; none would be wrong, and all may be right. Different trades (epidemiology versus public relations, for example) quote different "minimum" sample numbers, and so do different organizational cultures, often based on nothing more than a psychological comfort zone. For example, in the United States some media content providers prefer to accept only consumer sample sizes of at least 400. Why? Merely because findings that center near 50% of the sample response carry a margin of error of no more than +/- 5% (a nice round number) at the 95% confidence level (another nice round number). Ask a pharmaceutical company whether this would be acceptable for the test of a new immunization treatment, however, and they might look at you in shock. Not to be too flippant, but the accuracy demanded of a sample can vary with how much the resulting data carries life-or-death consequences.
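That round-number rule of thumb falls out of the standard worst-case margin-of-error formula for a simple random sample, z * sqrt(p(1-p)/n), evaluated at p = 0.5 (which maximizes the margin). A minimal Python sketch, using only the standard library (the function name is my own):

```python
from statistics import NormalDist

def margin_of_error(n, confidence=0.95, p=0.5):
    """Worst-case margin of error for a simple random sample,
    via the normal approximation. p = 0.5 maximizes p * (1 - p),
    so this is the largest margin any result from the sample can carry."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-tailed critical value
    return z * (p * (1 - p) / n) ** 0.5

# A sample of 400 yields the "nice round" +/- 5% at 95% confidence:
print(f"{margin_of_error(400):.1%}")  # prints 4.9%
```

In practice the value is 4.9%, which media outlets round up to the comfortable "+/- 5%".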

In other cases, smaller sample sizes are gladly accepted by the media or business stakeholders. For example, if a study about the future of NASA were conducted among American astronauts who have ever been in space, a sample size of 35 would probably be considered quite impressive in its coverage of that very limited and difficult-to-reach population universe.

That said, business organizations make very important tactical and strategic decisions all the time based on research data covering only 100, or 50, or even 30 people. They may take away "directional" learning from data with a margin of error of +/- 9% at the 90% confidence level. Indeed, the City of Austin, Texas publicizes that its norm for sufficient statistical validity in water-load research is "90/10": at the ninety percent confidence level, a maximum ten percent margin of error. Based on industry standards and published experience for similar applications, the 90/10 criterion can easily be met with a sample of only 100 respondents. So, what is good enough for the City of Austin is perhaps not good enough for another client, or perhaps it is. Again, neither is absolutely correct, and neither is absolutely wrong. It is a matter of needs, budget, consequences, and preference.

Cost is an important factor to consider when determining a sample size. If the “ideal” sample size and design methodology don’t fit a budget or timeline, then trade-off decisions are going to be necessary, some of which may compromise the quality and scope of the research. In one example, by surveying 225 American workers, ICR achieved a sample tolerance of:

* Margin of error no more than +/- 4.27% at the 80% level of confidence
* Margin of error no more than +/- 5.48% at the 90% level of confidence
* Margin of error no more than +/- 6.53% at the 95% level of confidence
* Margin of error no more than +/- 7.75% at the 98% level of confidence
* Margin of error no more than +/- 8.59% at the 99% level of confidence

Had our client wished to cut these margins of error in half, the sample would have had to be quadrupled, to 900 respondents, and costs would have nearly tripled. Which of these confidence levels was "necessary"? In our opinion, it is hardly objectionable to state that all of our U.S. findings (based on total qualified respondents) would be accurate (that is, reflect the "true" opinion of the entire population sampled) to within 4.3 percentage points or less on at least 8 out of 10 independently sampled outcomes. If our client or the media were to reject this standard in favor of samples of 500, 900, 1,000, or more exclusively, we would contend that such a research solution may be unnecessarily large and unnecessarily expensive, considering the survey topic was financial investment matters, not the effects of a tainted pharmaceutical remedy on an at-risk patient population. We recognize the challenge of overcoming industry-specific customs and local best practices, but our company stood behind this research as perfectly valid within the tolerances indicated above.
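Because the margin of error shrinks with the square root of the sample size, halving it requires quadrupling n. The bulleted figures above, and the 225-versus-900 comparison, can be reproduced with the usual worst-case normal approximation (a sketch; the helper name is mine):

```python
from statistics import NormalDist

def moe(n, confidence, p=0.5):
    """Worst-case (p = 0.5) margin of error via the normal approximation."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return z * (p * (1 - p) / n) ** 0.5

# The n = 225 table from the post:
for conf in (0.80, 0.90, 0.95, 0.98, 0.99):
    print(f"{conf:.0%} confidence: +/- {moe(225, conf):.2%}")
# prints 4.27%, 5.48%, 6.53%, 7.75%, 8.59% in turn

# Quadrupling the sample halves every margin, since sqrt(4) = 2:
print(f"n = 900 at 80% confidence: +/- {moe(900, 0.80):.2%}")  # prints 2.14%
```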

So, in summary, I guess the answer to our title question is actually another question: "What's your tolerance for error?"



2 Comments:

At 7:35 AM, November 07, 2005, Anonymous Anonymous said...

Perhaps you are familiar with Mlive.com (everything Michigan). In order to log on to the site, one must provide a zip code, year of birth, and gender. Being an ornery old man, I respond with 65432 1987 F instead of the true 32792 1933 M. Of course, by not deleting my cookies, my erroneous response is perpetuated.

What use is made of the requested information? "So we can know more about our readers," they say. Must have a high tolerance for error, wouldn't you say?

No, I don't feel guilty for completely distorting their survey results.

At 11:16 AM, November 09, 2005, Blogger Gregory Kohs said...

You are not alone, W. Morrell. In fact, there is an organization devoted to the notion that these age/sex/location queries are completely pointless, and they advise filling in fictitious info ALWAYS.

Learn more at http://www.bugmenot.com/

