Standard Deviation

Categories: Metrics

One of the most common measures of spread, or how scattered the data points in a data set are. The standard deviation is roughly how far we expect a randomly selected data point to be from the mean of the data set, or more simply (but also a little less correctly), it's the average distance of each data point from the mean.

For a sample of data, we find the mean first. Then we subtract the mean from each data point, square those differences, and add them up. We divide by one less than the number of data points, and then take the square root of that value. Boom. Instant standard deviation. (That last step matters: the standard deviation is just the square root of the variance, so skip it and you've got the variance instead.)
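In symbols, that recipe is the standard textbook formula for the sample standard deviation, where x̄ is the sample mean and n is the number of data points:

```latex
s = \sqrt{\frac{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}{n - 1}}
```

Dividing by n - 1 instead of n (Bessel's correction) is what makes this the sample version; it keeps the estimate from running low.

And here's the same recipe as a minimal Python sketch, with a made-up data set purely for illustration (in real life, statistics.stdev does this in one call):

```python
import math

def sample_std_dev(data):
    """Sample standard deviation, following the recipe above."""
    n = len(data)
    mean = sum(data) / n                             # find the mean
    squared_diffs = [(x - mean) ** 2 for x in data]  # subtract it, square the differences
    variance = sum(squared_diffs) / (n - 1)          # add them up, divide by n - 1
    return math.sqrt(variance)                       # take the square root. Boom.

# Made-up data set, just for illustration
data = [4.0, 7.0, 5.0, 9.0, 5.0]
print(sample_std_dev(data))  # 2.0
```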

While the standard deviation gets used willy-nilly pretty much everywhere, it shouldn't be used on data sets with obvious outliers or heavy skew in one direction or another. Because each deviation gets squared, a single extreme point can inflate the result dramatically; for data like that, a robust measure of spread such as the interquartile range is a safer choice.
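Here's a quick illustration of that sensitivity, with invented numbers: tack one outlier onto an otherwise calm data set and watch the standard deviation balloon.

```python
import statistics

# Invented data: five ordinary values, then the same values plus one outlier
calm = [4.0, 7.0, 5.0, 9.0, 5.0]
with_outlier = calm + [60.0]

print(statistics.stdev(calm))          # 2.0
print(statistics.stdev(with_outlier))  # about 22.1: one extra point, over ten times the spread
```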
