Confidence Interval

Possibly the most misinterpreted topic in all of statistics, at least among those who want to use its vast power.

A confidence interval is a range of values, calculated from a sample, that estimates a population parameter (like a mean, standard deviation, or percentage) we can’t measure exactly, because we simply can’t survey every member of the population (because we’re not the Flash).

Correctly interpreted, a confidence interval tells us that we are X% confident that the true parameter (mean, standard deviation, percentage, etc.) lies between some lower limit and upper limit.

The X% is chosen ahead of time and represents how confident we want to be that our range contains the true value. More precisely, the confidence lives in the method: if we repeated the whole sampling process over and over, about X% of the intervals we built would capture the true parameter. Incorrectly interpreted, it makes us mad...like strangling mad.
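If you want to see that repeated-sampling idea in action, here's a quick Python sketch. The "true" 50% Netflix-love rate, the 230 respondents per survey, and the 10,000 repeats are all made-up numbers for illustration: simulate a pile of surveys, build a 95% interval from each one, and count how often the interval captures the truth. The answer lands right around 95%.

```python
import math
import random

# Pretend we secretly know the truth: 50% of adults love Netflix best.
# (The 50%, the 230 respondents, and the 10,000 repeats are made-up numbers.)
TRUE_P = 0.50
N = 230            # respondents per survey
REPEATS = 10_000   # how many times we rerun the whole survey
Z = 1.96           # z-value that goes with a 95% confidence level

captured = 0
for _ in range(REPEATS):
    # One simulated survey: each respondent says "yes" with probability TRUE_P.
    yes = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = yes / N
    margin = Z * math.sqrt(p_hat * (1 - p_hat) / N)
    # Did this survey's 95% interval actually capture the true value?
    if p_hat - margin <= TRUE_P <= p_hat + margin:
        captured += 1

print(f"{captured / REPEATS:.1%} of the intervals contained the true 50%")
```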

We might survey 230 American adults, asking whether Netflix is their favorite online viewing platform, as a way to gauge how all American adults feel about Netflix. Suppose 109 of them say yes. We decide on a 95% confidence level (the most commonly used level) and compute a confidence interval of 0.41 to 0.54. We can be 95% confident that the true percentage of American adults who think Netflix is the bomb-iest of the bombs lies between 41% and 54%.

Are we sure it lies in that range? Nope, nor can we be, but we are 95% confident it does.
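For the curious, the math behind an interval like that is pretty tame. Here's a minimal Python sketch using the normal-approximation (Wald) recipe, p-hat ± z × sqrt(p-hat × (1 − p-hat) / n); that formula choice is our assumption, since there are other (fancier) ways to build an interval for a percentage.

```python
import math

def proportion_ci(yes_count, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.

    z = 1.96 corresponds to the usual 95% confidence level.
    """
    p_hat = yes_count / n
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# 109 "yes" answers out of 230 respondents (the illustrative survey above).
low, high = proportion_ci(109, 230)
print(f"95% CI: {low:.2f} to {high:.2f}")  # roughly 0.41 to 0.54
```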
