Reading Results - At A Glance

We've covered a lot of ground here lately, but in fact we've only scratched the surface of ways to gather and analyze data. There are all kinds of distributions, formal tests, and complicated jargon out there, and there's no way we can cover it all. You would run screaming from reading it, we would run screaming from writing it, and no one would be any wiser.

Other people know this stuff, though, and we'd like to be able to make sense of the results they find. When people report their results, there are a few key pieces of information that they will always include. Or should always, at least.

The first thing people want to know is the test statistic and its value. We didn't mention it before, but we've been using test statistics this whole time. Remember when we examined the dogcatcher votes? The sample proportion was our test statistic. And when we compared the scaredy-cats? We used the difference between the means as our test statistic.

Basically, a test statistic is any number that we calculate from the data and compare to what the null hypothesis predicts. So yeah, that's kind of important. There are a lot more test statistics out in the wild, though, so keep on your toes. We'd hate for a wild test statistic to eat you whole.
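
If you like seeing things in code, here's a minimal sketch (in Python, with completely made-up numbers rather than the actual dogcatcher or scaredy-cat data) of the two kinds of test statistic we just mentioned:

```python
# Test statistic #1: a sample proportion (think of the dogcatcher votes).
votes_for = 53              # hypothetical "yes" votes in our sample
total_votes = 100           # hypothetical sample size
sample_proportion = votes_for / total_votes

# Test statistic #2: a difference between two group means
# (think of comparing the scaredy-cats).
group_a = [3.1, 2.8, 3.5, 3.0]    # hypothetical measurements, group A
group_b = [2.4, 2.6, 2.2, 2.9]    # hypothetical measurements, group B
mean_difference = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Each number then gets compared to what the null hypothesis predicts.
print(sample_proportion)    # under the null, this might be expected to sit near 0.5
print(mean_difference)      # under the null, this should sit near zero
```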

Next we want to know about the sample size. Is it a wee-tiny sample, or a huge honking one? The more data used to create an estimate, the better. Too-small sample sizes make an estimate unreliable, like a Renaissance duke.

What makes for a "big" sample, though? It really depends on what's being studied. Someone looking at the sleeping habits of left-handed narcoleptics who were born in the third week of April will only be able to find a handful of people to sample, while a physicist may have hundreds or thousands of atom-smashing results.
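
To see why size matters, here's a minimal sketch using simulated data (just random numbers, not anything from a real study): the standard error of a mean shrinks roughly like one over the square root of the sample size.

```python
import random
import statistics

random.seed(1)
for n in (10, 100, 1000, 10000):
    # Simulate a sample of size n from a population with mean 50.
    sample = [random.gauss(50, 10) for _ in range(n)]
    sem = statistics.stdev(sample) / n ** 0.5    # standard error of the mean
    print(f"n = {n:>5}   mean = {statistics.mean(sample):.2f}   SE = {sem:.2f}")
```

Bigger sample, smaller standard error, steadier estimate.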

The last thing we need is the P-value. Is it smaller than α, the significance level, meaning we can reject the null hypothesis? A lot of the time, figuring this out is the whole point of collecting and analyzing the data. So yeah, that's kind of important, too.
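
In code, that decision looks something like this minimal sketch. It uses SciPy's two-sample t-test on made-up measurements; the groups and the 0.05 significance level are just placeholders for whatever a real report would use.

```python
from scipy import stats

group_a = [3.1, 2.8, 3.5, 3.0, 3.3, 2.9]   # hypothetical measurements
group_b = [2.4, 2.6, 2.2, 2.9, 2.5, 2.3]   # hypothetical measurements
alpha = 0.05                                # the chosen significance level

t_statistic, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```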

There's one more thing to keep in mind. Oftentimes, a thing called the 95% confidence interval will be reported. When you see "confidence interval," you need to think "margin of error." Or else you'll think, "What is that?" and start panicking.

Take deep breaths into a paper bag, and remember that a confidence interval just gives the range of most plausible values for the thing being estimated, given the data collected. We would do a double take, and maybe a spit take, if we found a result outside of the interval.
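
Here's a minimal sketch of one common recipe for a 95% confidence interval for a mean: the estimate, plus or minus a critical value times the standard error. The data are made up, and real reports may use other interval recipes.

```python
import statistics
from scipy import stats

data = [3.1, 2.8, 3.5, 3.0, 3.3, 2.9, 3.4, 2.7]   # hypothetical measurements
n = len(data)
mean = statistics.mean(data)
sem = statistics.stdev(data) / n ** 0.5            # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)              # two-sided 95% critical value

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"95% confidence interval: ({low:.2f}, {high:.2f})")
# A value outside this range would earn a double take (maybe a spit take).
```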

Summary

Here's a handy list of what to look for from a statistical report.

  • The test statistic is measured from the sample, and it is compared to the sampling distribution under the null hypothesis. It goes fishing with the P-value on the weekends.
  • Bigger samples give more precise estimates. We like that.
  • When the P-value is smaller than the significance level (usually 0.05), we can reject the null hypothesis. When it is larger, we fail to reject the null hypothesis. That's simple enough.
  • When a value is inside of a confidence interval, it is consistent with our data. Unlikely values are outside of the interval. Smaller samples tend to have wider intervals, which means they are less precise.

All of what we've said assumes that the data were gathered using random sampling, of course. Non-random sampling makes the statistical results suspect faster than yelling "I did it" at the police.