Statistical Significance

When we run a hypothesis test on a claim about a population proportion, mean, standard deviation, or some other parameter, we're trying to determine whether there is evidence to support that claim. The evidence comes in the form of a probability.

This probability, called a p-value, is the likelihood that, if the claimed value were actually true, a sample like ours would produce a result (proportion, mean, standard deviation, etc.) as extreme as or more extreme than the one we got: as large or larger, or as small or smaller, depending on the direction of the claim. If this probability is very small, we have evidence that our claim about the population might be valid. If it isn't very small, there isn't evidence to support our claim.
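In symbols (a general sketch, where T is the test statistic, t_obs is its observed value, and H0 is the hypothesis that the claimed value is true):

$$
p\text{-value} = P\left(T \text{ at least as extreme as } t_{\mathrm{obs}} \mid H_0\right)
$$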

The cutoff separating "very small" from "not very small" must be decided in advance; it's a measure of how strong the evidence must be before we'll support the claim. When the p-value is smaller than the cutoff, we have statistically significant evidence for the claim. When it's larger, the evidence isn't strong enough, and the results are not statistically significant.
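As a minimal sketch of that decision rule in Python (the function name and the default 5% cutoff are just illustrative choices, not anything standard):

```python
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Apply the pre-chosen cutoff: significant only when p-value < alpha."""
    return p_value < alpha
```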

Generally, we choose either 5% or 1% as the cutoff for statistical significance. Suppose we ran a hypothesis test using 5% as our significance level to test the claim that the percentage of Shmoop cafeteria diners who love Wisconsin Cheesy Chicken day is less than the 73% our food vendor claims. After carefully following proper sampling procedures, we got a sample proportion of Cheesy Chicken day lovers of 64%.

The p-value turned out to be 0.26, or 26%. Since this is not less than our cutoff of 5%, the results are not statistically significant, meaning we can't conclude that the true proportion is actually less than the vendor's claimed 73%.
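Here is a sketch of how a p-value like that could be computed with a one-sided z-test for a proportion. The original doesn't give the sample size; a hypothetical n = 10 happens to reproduce a p-value of about 0.26 and is used purely for illustration (a sample that small would strain the normal approximation in real work):

```python
from math import sqrt
from statistics import NormalDist

p0 = 0.73     # vendor's claimed proportion (the null hypothesis value)
p_hat = 0.64  # proportion of Cheesy Chicken day lovers in our sample
n = 10        # hypothetical sample size; not given in the original text
alpha = 0.05  # significance cutoff chosen in advance

# One-sided test: H0: p = 0.73 versus Ha: p < 0.73
se = sqrt(p0 * (1 - p0) / n)   # standard error of p_hat if H0 is true
z = (p_hat - p0) / se          # standardized test statistic
p_value = NormalDist().cdf(z)  # left-tail probability P(Z <= z)

print(f"z = {z:.2f}, p-value = {p_value:.2f}")  # z = -0.64, p-value = 0.26
print("significant" if p_value < alpha else "not significant")  # not significant
```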
