Many studies on usability, online A/B testing, market research and countless other areas draw conclusions when, frankly, they should not. Why not? Their sample sizes are too small.
A lack of research subject recruiting leaves them with a paltry number of people, which makes the conclusions unreliable. Small sample sizes skew data by making one-time or rare occurrences seem more common than they actually are. Conversely, relatively common occurrences may not show up at all during the study.
The statistics behind this phenomenon are somewhat involved, but a small sample size can often produce results that are almost as bad as, if not worse than, not running a study at all.
Usability Misconceptions
○ A study of 30 participants (n=30) that observes zero failures only tells you that the true failure rate could still be as high as about 11.6% (the upper bound of a 95% confidence interval)
○ With n=100 and zero failures observed, the true failure rate could still be as high as 3.6%
○ With n=1000, that upper bound falls to roughly 0.37%
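These figures are consistent with the Clopper-Pearson (exact binomial) confidence interval: when zero failures are observed in n trials, the two-sided 95% upper bound reduces to 1 − (0.025)^(1/n). A minimal sketch (the function name is illustrative, not from any particular library):

```python
def zero_failure_upper_bound(n, confidence=0.95):
    """Upper bound of a two-sided Clopper-Pearson interval on the
    true failure rate when 0 failures are seen in n trials.
    With zero failures the bound reduces to 1 - (alpha/2)**(1/n)."""
    alpha = 1.0 - confidence
    return 1.0 - (alpha / 2.0) ** (1.0 / n)

for n in (30, 100, 1000):
    print(f"n={n:5d}: true failure rate could still be "
          f"up to {zero_failure_upper_bound(n) * 100:.2f}%")
```

Running this reproduces the bounds above: about 11.6% at n=30, 3.6% at n=100, and about 0.37% at n=1000.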
The Bare Minimum
○ A confidence rate of 95%
○ A small margin of error of +/- 5%
○ A moderate standard deviation assumption of 0.5, where responses are generally not split dramatically from one another (think of a standard bell curve)
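Plugging these assumptions into the standard sample size formula for estimating a proportion, n = z²·p(1−p)/e², gives the minimum number of participants. A sketch, assuming z = 1.96 (the normal quantile for 95% confidence) and p = 0.5 as the conservative, maximum-variance case:

```python
import math

def minimum_sample_size(z=1.96, margin_of_error=0.05, p=0.5):
    """Minimum n for estimating a proportion: n = z^2 * p(1-p) / e^2.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption (it maximizes p*(1-p))."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)  # always round up to a whole participant

print(minimum_sample_size())  # 385 at 95% confidence, +/-5% margin
```

So even under these bare-minimum assumptions, the formula calls for 385 participants, far more than the 30 or 100 that many studies settle for.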
Despite these statistical realities, many researchers assume that 100 or even 30 people is an acceptable number. Studies that want to draw segment-level conclusions (e.g. "people from Milwaukee liked our product better than those in Chicago") will need a much larger sample of each demographic than they expect.
If finding this many people and keeping the sample relatively random seems daunting, remember that companies like CFR Inc. have your back. We help studies recruit people from all over the country in enormously varied segment groups using reliable, eager subjects.
Visit our research subject recruiting page to find out more about how we can help you.