When preparing to conduct a research study, your research provider may use some terminology with which you’re unfamiliar. Here are explanations of some of the most commonly used terms and concepts in the research business.
Quantitative studies collect statistically valid data from large respondent samples. Usually conducted via telephone, mail, or email with a carefully crafted survey instrument, these studies primarily serve to confirm or disprove preliminary assumptions, insights, or ideas, driving informed decision-making and appropriate courses of action. Unlike other forms of research, the numerically calculated results of a quantitative study with a large enough number of respondents can be projected onto the market as a whole with an acceptable level of confidence.
On the other hand, qualitative studies, typically conducted via focus groups or in-depth interviews, explore participants’ emotional and rational reactions to, perceptions of, and attitudes toward a concept, product, or service. They can also reveal in-depth information on the product/service features most likely to drive increased usage. The exploratory nature of qualitative studies makes them valuable for developing initial insights and providing direction for further research. Due to the large amounts of open-ended data collected from smaller subsets of people, qualitative research requires a subjective interpretation of the data, which cannot be meaningfully quantified.
The word “population” refers to the entire universe of people targeted for the research. In short, it’s your target audience for the study. It’s often necessary to collect different information from different segments of the population, such as people with varying titles, decision-makers, and influencers.
A sampling frame is a group that is representative of the population to be surveyed. For example, let’s say you want the opinions of federal IT decision-makers. A possible list source would be a federal government publication subscriber list, such as FCW. The subscriber list would then be the sampling frame you use to make inferences about the broader population of federal IT decision-makers.
Sample size refers to the number of respondents in the sample who actually participate in a survey. This number is a key factor in determining the margin of error, and thus the confidence interval, in quantitative research studies.
You may recall hearing a plus-or-minus margin of error figure when opinion poll results are reported in the news. Derived from a formula based primarily on the size of the sample surveyed (with a smaller adjustment for the size of the total population), it defines how accurately the research results can be applied to the study’s population as a whole. For example, let’s say out of a total population of 25,000 federal program managers, you survey a small subset of 350, giving you a margin of error of +/- 5%. If 70% of the 350 people interviewed say they prefer X provider of networking services, the confidence interval tells you that the preference of the entire population can reasonably be expected to fall within five percentage points of the sample’s rate. In other words, you can be reasonably confident that anywhere between 65% and 75% of the entire population would report the same preference of networking services if surveyed.
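As a rough sketch of where a figure like +/- 5% comes from, the standard formula for a proportion’s margin of error at 95% confidence can be worked out as follows. The function name, the assumed proportion of 0.5, and the z-value of 1.96 are illustrative conventions, not details from the study described above:

```python
import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96).

    n: sample size; N: total population size (optional, enables the
    finite-population correction); p: assumed proportion (0.5 is the
    most conservative choice).
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        # The finite-population correction shrinks the margin slightly
        # when the sample is a noticeable fraction of the population.
        moe *= math.sqrt((N - n) / (N - 1))
    return moe

# 350 respondents out of 25,000 federal program managers
print(f"+/- {margin_of_error(350, N=25000):.1%}")  # prints "+/- 5.2%"
```

Note that the sample size dominates the result: surveying 350 people yields roughly the same margin whether the population is 25,000 or 25 million, which is why the finite-population correction barely moves the figure here.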
Screeners or filters are questions that qualify respondents for subsequent questions or ensure that the survey is within their realm of experience. For example, if you are attempting to reach federal decision-makers, you should have a screener question that ensures you’re speaking to a federal government employee and that the person has at least some level of decision-making responsibility relevant to your offering.