Z-Score
  
A z-score is a measure of how far an individual data point is from the mean of its data set, expressed in units of that data set's standard deviation.
We calculate a z-score by subtracting the mean from the data point and then dividing the result by the standard deviation. Z-scores are a convenient way to compare two numbers that come from different data sets.
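As a quick sketch of that calculation in Python (the function name and parameter names here are just illustrative, not from any particular library):

```python
def z_score(value, mean, std_dev):
    """How many standard deviations `value` lies above (or below) the mean."""
    return (value - mean) / std_dev
```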
Let’s say we have a Red Delicious apple that weighs 5.9 ounces and a navel orange that weighs 8.9 ounces. We can determine which is relatively larger by finding the z-score for each fruit. Red Delicious apples have a mean weight of 5.3 ounces with a standard deviation of 0.6 ounces. To get the z-score for our apple, we subtract the mean (5.3) from the apple’s weight (5.9), giving us 0.6, and then divide by the standard deviation (0.6 / 0.6) to get a z-score of 1. Navel oranges have a mean weight of 8 ounces with a standard deviation of 1.5 ounces. That gives us (8.9 – 8) / 1.5, or 0.9 / 1.5, for a z-score of 0.6. Our apple has the larger z-score, so it is larger relative to other apples than our orange is relative to other oranges.
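Running the same arithmetic with the weights, means, and standard deviations quoted above confirms the comparison:

```python
apple_z = (5.9 - 5.3) / 0.6    # 0.6 / 0.6 = 1.0
orange_z = (8.9 - 8.0) / 1.5   # 0.9 / 1.5 = 0.6
print(round(apple_z, 2), round(orange_z, 2))   # 1.0 0.6
print(apple_z > orange_z)                      # True: the apple is relatively larger
```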
Who says you can’t compare apples and oranges?