Quantitative studies are popularly lauded as more concrete, scientific, and compelling than qualitative studies. In reality, proper quantitative research can be quite expensive, requires tightly controlled conditions, and often lacks the depth or specificity qualitative studies can provide, at least as small and medium-sized research teams typically wield it.
The point of highlighting these vulnerabilities is not to favor one study type over the other, but to note that quantitative studies' reputation for infallibility is not entirely deserved. Research teams will likely encounter the following three issues during the course of a quantitative study. Fortunately, there are ways to work around them, as long as teams are aware of the risks and respond accordingly.
Number Fetishization
89% of statistics can be misleading. Yes, that number was made up on the spot, but statements like that can be quite compelling when inserted into press releases or marketing literature.
Beware of focusing too intently on any single number or statistical outcome, because it can quickly lead you astray. Keep in mind that the standard 95 percent confidence level still leaves a 1-in-20 chance that the pattern in your data is simply random noise.
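To make that 1-in-20 risk concrete, here is a minimal simulation sketch in Python using NumPy and SciPy (our choice of tools, not something the article prescribes). It runs thousands of comparisons on pure noise and counts how often a standard t-test declares "significance" at the 95 percent level:

```python
# Minimal sketch: simulate experiments on pure noise and count how often
# a t-test reports "significance" at the 95 percent confidence level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: any "effect" is noise.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

# Expect roughly 5 percent, about 1 in 20, even though no real effect exists.
print(f"False positive rate: {false_positives / n_experiments:.3f}")
```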
Consider a study indicating that subjects overwhelmingly chose blue cups over red ones: that alone does not mean a brand should move wholesale toward blue designs. A finding like a blue preference must be tested against many other controls and assembled into an experimental framework alongside variables like size and design, so that each conclusion builds on the last. Ideally, studies are repeated to rule out the possibility that the first results were simply random noise. Also remember the importance of large sample sizes, as the sketch below illustrates.
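As a rough illustration of why sample size matters, this short sketch (hypothetical numbers, using the standard normal-approximation formula for a proportion) shows how the 95 percent margin of error around an observed preference shrinks as the sample grows:

```python
# Minimal sketch: the 95 percent margin of error for an observed preference
# share shrinks as the sample grows (normal approximation for a proportion).
import math

p_hat = 0.6  # hypothetical observed share preferring blue cups
for n in (30, 100, 500, 2000):
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"n={n:>5}: {p_hat:.0%} +/- {margin:.1%}")
```

At 30 subjects the estimate is plus or minus roughly 18 points, far too wide to justify a wholesale design change; at 2,000 it narrows to about 2 points.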
Falsely Compelling Correlations
At a 95 percent confidence level, a test tracking 7 metrics allows 21 pairwise correlations (7 choose 2), and the chance that at least one of them is a false positive is 1 - 0.95^21, roughly 66 percent. Some correlations also emerge quite unexpectedly, as illustrated by the humorous website "Spurious Correlations."
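A small sketch of the same effect (again in Python with NumPy and SciPy, using made-up random data): generate 7 completely independent "metrics," test all 21 pairs, and watch spurious hits appear from pure noise:

```python
# Minimal sketch: 7 unrelated "metrics" yield 21 pairwise correlations;
# at p < 0.05, spurious hits regularly appear in pure noise.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
metrics = rng.normal(size=(50, 7))  # 50 observations of 7 independent metrics

spurious = 0
for i, j in combinations(range(7), 2):  # 21 pairs in total
    r, p_value = stats.pearsonr(metrics[:, i], metrics[:, j])
    if p_value < 0.05:
        spurious += 1
        print(f"metrics {i} and {j}: r={r:.2f}, p={p_value:.3f}")

print(f"{spurious} 'significant' correlation(s) out of 21, from pure noise")
# Across 21 independent tests at alpha = 0.05, the chance of at least one
# false positive is 1 - 0.95**21, roughly 66 percent.
```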
Look for indicators of causation beyond statistical strength. One such measure is gradient, sometimes called dose-response: a causal link between stimulus A and outcome B is more plausible if greater exposure to A is consistently associated with more of B. You can also simply try to reproduce your results.
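One simple way to check for a gradient is a monotonic-trend test. The sketch below (hypothetical exposure and outcome numbers, invented for illustration) uses Spearman's rho, which measures whether the outcome consistently rises with the dose rather than merely correlating linearly:

```python
# Minimal sketch: a dose-response (gradient) check. If more of stimulus A
# is consistently associated with more of outcome B, a causal link is more
# plausible than one inferred from a single correlation.
import numpy as np
from scipy import stats

# Hypothetical data: exposure level of A and measured outcome B per subject.
dose_a = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
outcome_b = np.array([1.1, 0.9, 1.8, 2.1, 2.9, 3.2, 4.1, 3.8, 5.0, 5.2])

# Spearman's rho tests for a monotonic trend, not just linear association.
rho, p_value = stats.spearmanr(dose_a, outcome_b)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```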
Most importantly, look for evidence of the causal relationship in other studies and research papers covering similar ground. Always be tentative with your conclusions, and try to back them up with other studies before declaring them too loudly.
Beware Novelty Bias
Journalism has "man bites dog," and medicine has zebras. "Zebras" refers to the temptation to reach for exotic and unusual explanations before considering the more common and likely one first, as in the aphorism "when you hear hoofbeats, think horses, not zebras." "Man bites dog" describes how uncommon events are reported far more often than common ones, skewing public perception into thinking such events are more frequent than they actually are.
Both of these temptations can affect the conclusions you draw from a study and how you use them. Temper your excitement at unusual or exotic statistical findings, and acknowledge, both internally and publicly, that surprising results may be erroneous if others have rarely or never reproduced them.
Learn More About Quantitative Data Collection in Our Free Guide!
You can avoid the mistakes above, as well as many others encountered during research, by consulting our helpful video on avoiding common market research mistakes.