Data Smoothing

When you're trying to draw conclusions from data that make some kind of rational sense, you don't want extraneous information in the mix. It's a huge help if, say, a third of the data is a complete waste and you can just throw it out. Like taking off your backpack before you swim that race this time. This data dieting, not sponsored by Jenny Craig, is called data smoothing. The process deploys an algorithm to remove "noise," i.e., unnecessary data points, so that the important patterns stand out.

This pre-processing can be particularly useful for making predictions, like forecasting future fuel prices from past data. The most common example of data smoothing is the wild and crazy notion of a moving average. Let's say we have eight test scores already in the can this semester, and mom just saw our class average online. She's juuuust about to ground us for eternity.

We posted a 62, 58, 41, 65, 71, 77, 79, and 85. Our overall class average isn't great...67.25, in fact. But what if we calculate a moving average, taking three scores at a time and sliding that window forward one score at a time: the average of 62, 58, and 41; then the average of 58, 41, and 65; then the average of 41, 65, and 71; and so on? We get moving averages of 53.67, 54.67, 59, 71, 75.67, and 80.33. Now we can legitimately argue to mom that our scores trended upward throughout the semester, based on the steady climb in the moving average. That trend looks a whole lot better than the overall average does. And yeah, mom. Can't blame us for trying.
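
For the skeptics in the room (hi, mom), here's a minimal Python sketch of that same calculation. The score list and the three-score window come straight from the example above; everything else is just illustrative:

```python
# Test scores from the example above, in chronological order.
scores = [62, 58, 41, 65, 71, 77, 79, 85]
window = 3  # window size is a choice; 3 matches the example

# The plain overall average mom saw online.
overall_average = sum(scores) / len(scores)
print(f"Overall average: {overall_average:.2f}")  # 67.25

# Slide a 3-score window across the list, averaging each slice.
moving_averages = [
    round(sum(scores[i:i + window]) / window, 2)
    for i in range(len(scores) - window + 1)
]
print(moving_averages)  # [53.67, 54.67, 59.0, 71.0, 75.67, 80.33]
```

Widening the window smooths harder (more noise removed, slower to react); narrowing it keeps more detail but lets the noise back in.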
