The only reason physics is able to explain the world is because people did experiments. Over the centuries we’ve measured everything from the strength of gravity to the height of David Beckham’s mohawk. A big part of physics is understanding and using our measurements in the right way. It’s no good spending days collecting data if we don’t know what it means or how to use it. We might as well have spent the day at the movies, eating popcorn and shushing people in the back row.
When making measurements in a physics experiment, it’s important to always use SI units (short for Système International, the international system of units). The official list of base SI units is: the meter, kilogram, second, ampere, kelvin, mole and candela. That means units like inches, pounds, hours and Fahrenheit are banished to the scrapheap of history. We never liked them anyway. SI units work in powers of ten, which makes them a lot more convenient for scientists.
Hold on a minute: Surely there are more units than those seven? That’s true, it’s just that every other unit is a combination of the seven SI base units. For example, force is measured in newtons, but a newton is really a kilogram-meter-per-second-squared (kg·m/s²). Energy is measured in joules, and a joule is just a newton-meter. As we go through physics topic by topic, we’ll learn the correct units for everything.
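To see how derived units fall out of the base ones, here’s a quick sketch with made-up numbers (the mass, acceleration and distance are purely illustrative):

```python
# Derived SI units built from base units, with invented example numbers.

mass = 2.0          # kilograms (kg)
acceleration = 9.8  # meters per second squared (m/s^2)

# Force in newtons: 1 N = 1 kg*m/s^2
force = mass * acceleration

distance = 3.0      # meters (m)

# Energy in joules: 1 J = 1 N*m = 1 kg*m^2/s^2
energy = force * distance

print(f"Force:  {force} N")    # 19.6 N
print(f"Energy: {energy} J")
```

Keep track of the units alongside the numbers and the derived unit pops out on its own: multiplying kilograms by meters-per-second-squared gives newtons.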
Every good physics experiment involves measuring two things and plotting them on a scatter graph. The variable we changed (the independent variable) is plotted on the x-axis, and the resulting variable we looked at (the dependent variable) is plotted on the y-axis. We then draw a line of best fit to represent that data. An important tip: don’t assume that lines are straight. A line of best fit could be as curvy as a cartoon villain’s mustache, as long as it fits the data.
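The recipe above can be sketched in a few lines of Python, using numpy and matplotlib with invented example data (the x and y values here are not real measurements):

```python
# Scatter plot plus a line of best fit, with made-up data.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# Independent variable (what we changed) goes on x,
# dependent variable (what we measured) goes on y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Fit a straight line y = m*x + c. If the data looked curved,
# we'd raise the degree or fit a different model instead.
m, c = np.polyfit(x, y, deg=1)

plt.scatter(x, y, label="measurements")
plt.plot(x, m * x + c, label=f"best fit: y = {m:.2f}x + {c:.2f}")
plt.xlabel("independent variable")
plt.ylabel("dependent variable")
plt.legend()
plt.savefig("best_fit.png")
```

`deg=1` is the “don’t assume lines are straight” knob: swap in a higher degree and `polyfit` will happily draw something as curvy as that villain’s mustache.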
When talking about that data, scientists love to use long words and make themselves sound clever, but it’s important that we know what those words actually mean. Otherwise we’ll end up sounding like an episode of Star Trek. Telling someone that “The infuser was decoupling, so I polarized the EM inhibitor,” might work on the starship Enterprise, but it won’t serve us well during a physics exam. An example of this is the difference between precision and accuracy. People love to use the words interchangeably, but they mean different things. Precision is how finely our measurements are made – how many digits we can read off – and accuracy is how close our measurements are to the true value.
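A toy example makes the distinction concrete. Suppose (purely hypothetically) we know a rod is exactly 2.500 meters long, and two different rulers give us these readings:

```python
# Precision vs accuracy, with invented readings of a 2.500 m rod.
true_length = 2.500

# Precise but NOT accurate: lots of digits, consistently too high.
precise_not_accurate = [2.7313, 2.7312, 2.7314]

# Accurate but NOT precise: close to the truth, only read to 0.1 m.
accurate_not_precise = [2.5, 2.4, 2.6]

def average(readings):
    return sum(readings) / len(readings)

print(average(precise_not_accurate))  # way above 2.500: inaccurate
print(average(accurate_not_precise))  # right on 2.500: accurate
```

The first ruler reads to four decimal places (very precise) but is way off the true value; the second only reads to one decimal place, yet its readings cluster around the truth.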
Our data will also have experimental error, otherwise known as uncertainty. This isn’t because you suck at collecting data, though perhaps you do – we’re not going to judge. The truth is that even the best scientists in the world have to deal with error. For example, if a weighing scale measures to the nearest 0.1 of a kilogram, then our uncertainty is plus or minus 0.1 kilograms. This is called instrument error, and makes our data less precise. However, let’s say it turns out that the scale was reading 0.5 kilograms too heavy – all of our readings are off by 0.5 kilograms. That’s an example of systematic error, which makes our data less accurate. There’s also something called random error, which is a natural variation: like how if we’re timing something with a stopwatch, we won’t quite get the same number every time. This is another thing that makes our data less precise.
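We can simulate all three kinds of error in one go. Here’s a sketch, with invented numbers: suppose someone’s true mass is 70.00 kilograms, the scale reads 0.5 kilograms too heavy (systematic error), displays to the nearest 0.1 kilograms (instrument error), and each reading jitters a little (random error):

```python
# Simulating instrument, systematic and random error, with made-up values.
import random

random.seed(42)  # fix the "random" error so the demo is reproducible

true_mass = 70.00          # kg (the value we're trying to measure)
systematic_offset = 0.5    # kg: the scale always reads too heavy

readings = []
for _ in range(5):
    random_error = random.uniform(-0.2, 0.2)   # natural variation
    measured = true_mass + systematic_offset + random_error
    measured = round(measured, 1)              # scale reads to 0.1 kg
    readings.append(measured)

print(readings)
print(sum(readings) / len(readings))
```

Averaging more readings shrinks the effect of the random error (it partly cancels out), but the average still sits about 0.5 kilograms above the true value: no amount of repetition fixes a systematic error. That’s why we calibrate instruments.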