Equipment broke down,
We used the wrong tool.
Oh baby, baby,
Oops we made a mistake,
Measured the wrong lake.
That lil' error is not that innocent.
Okay, so maybe we're not as good as Weird Al Yankovic with our song parodies, but we're still proud of this one. That's because this section is all about errors in experiments. And a little bit about pop music. We'll cover where errors come from, what to do if you make an uh-oh, and how to spot them in other experiments.
Let's start with where errors come from. As we know quite well, no human is perfect. Which means we are oftentimes the source of error in an experiment. For example, we might pour just a teensy bit too much liquid into that graduated cylinder, or not fully stretch that plant leaf out when we're measuring it. Or maybe we go bigger and accidentally spill a chemical into our solution that doesn't belong there, or sneeze onto our petri dish full of doorknob bacteria. As hard as we try to be perfect, it doesn't always work out that way.
A pretty scary example of human error occurred on an Air Canada flight in 1983. The fuel gauge on the plane was broken, so the ground crew was using other ways to measure how much fuel had been loaded onto the plane. When they converted the fuel's volume into weight, they used pounds instead of kilograms, which meant the plane had about half as much fuel as they thought it did. Needless to say, the plane made a very dramatic landing very far short of its intended destination.
The best way to reduce human error is to simply be careful. We should always use equipment properly, follow directions carefully, and try not to hiccup while we're pouring liquids. There's a reason our teacher droned on about reading a meniscus from the bottom and spent forever talking about proper petri dish procedures. They're trying to help us avoid those pesky errors. If something still goes awry, we shouldn't beat ourselves up over it (unless it causes a plane to make an emergency landing). We have to get back in the saddle, ask for help if we need it, and repeat that experiment a bunch more times to get those accurate data.
The next source of error is our equipment. Yes, yes, we thought those Erlenmeyer flasks were supposed to be our faithful servants too. Newsflash: equipment isn't perfect either. Most lab equipment, from glassware to scales to the latest particle accelerator, comes with its own range of error. For example, when we pour water from a glass, some of the droplets stick to the inside of the glass. This means the water we poured out is a little less than the water we put in. Bam, error.
Just how much error does glassware introduce? Let's say we take a look at the side of a 100 mL beaker. We might notice that it says ±2 mL. This tells us that when we measure out 100 mL of a liquid, we could actually have anywhere from 98 mL to 102 mL. Graduated cylinders, pipettes, scales, and pretty much any other lab equipment that gets used to measure stuff all have their own amount of error.
Reducing this kind of error is easy. We just have to choose the right equipment for the job. That means picking the piece of equipment that measures what we need it to with the least amount of error. Let's say we need 2 mL of water. We could use a 100 mL flask with an error of ±2 mL or a 10 mL pipette with an error of ±0.02 mL. Pass the pipette, please.
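To see just how lopsided that choice is, we can compare each tool's tolerance as a percentage of the 2 mL we actually need. Here's a quick Python sketch using the illustrative tolerances from above:

```python
# Compare two pieces of glassware for measuring the same 2 mL sample.
# Tolerances are the illustrative values discussed above.

def relative_error(tolerance_ml, volume_ml):
    """Tolerance expressed as a percentage of the volume being measured."""
    return tolerance_ml / volume_ml * 100

flask = relative_error(2.0, 2.0)     # 100 mL flask, tolerance ±2 mL
pipette = relative_error(0.02, 2.0)  # 10 mL pipette, tolerance ±0.02 mL

print(f"Flask:   up to {flask:.0f}% off for a 2 mL sample")
print(f"Pipette: up to {pipette:.0f}% off for a 2 mL sample")
```

A ±2 mL tolerance on a 2 mL sample means we could be off by 100%, while the pipette keeps us within about 1%. Same sample, wildly different error.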
As any person who has dealt with a computer can confirm, equipment can simply malfunction as well. We could be hummin' along, measuring the mass of every penny we can find when our scale suddenly goes batty and tells us a penny weighs forty pounds. That would definitely be an error. And of course, if that equipment isn't calibrated correctly, all of our data are going to be off. But at least it'll be off by the same amount. No, we aren't comforted by that fact either.
Reducing equipment error is fairly easy. We can start by giving our equipment a once-over before we get started. We'll want to be on the lookout for cracks, chips, missing or broken parts, and anything else that might affect our ability to collect accurate data. Leaky glassware and chipped meter sticks aren't going to do anyone's data any favors, so it's best to trade those in for functioning goods.
We'll also want to make sure whatever tool we're using is calibrated correctly and in good condition. How we calibrate our equipment depends on what the manufacturer suggests, so we may need to spend some quality time with the instruction manual. Again, repeating our experiment can help to reduce some of this error, but keep in mind that if equipment isn't calibrated correctly, we're just repeating the error with every experiment.
Lastly, there's random error. This is tough to avoid because it's, well, random. This is stuff like a breeze blowing our scale as we're measuring the mass of something or a rogue cloud shading one plant and not another.
A great example of random error occurred when scientists built the Hubble Space Telescope. This super fancy, very expensive telescope's first pictures were all kinds of blurry. Talk about awkward. It was later discovered that the telescope's main mirror had been ground to slightly the wrong shape, off by about 1/50th of the thickness of a human hair. How did this happen? One theory is that a tiny speck of paint on the machine used to test the mirror's shape produced the wrong results and made scientists think the Hubble was good to go. Luckily, they were able to fix the mistake with another fancier, more expensive machine called the Corrective Optics Space Telescope Axial Replacement.
The best way to reduce random error is by performing an experiment more than once. In fact, perform it lots of times, then take the average. That random data point will fizzle into the background with each accurate data point we collect. Also, we want to make sure that our equipment is in good condition and try to get rid of any "distractions," like paint specks, that may introduce random error into our experiment.
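We can actually watch averaging do its magic with a little simulation. In this hypothetical sketch, each "measurement" of a penny's mass gets nudged by a small random error (our stand-in for breezes and rogue clouds), and the average of many repeats lands much closer to the true value than any single measurement is guaranteed to:

```python
import random

random.seed(42)  # reproducible "randomness" for the demo

TRUE_MASS = 2.50  # hypothetical true mass of a penny, in grams

def measure():
    """One measurement, plus a little random error from the environment."""
    return TRUE_MASS + random.uniform(-0.05, 0.05)

single = measure()
average = sum(measure() for _ in range(100)) / 100

print(f"One measurement:        {single:.3f} g")
print(f"Average of 100 repeats: {average:.3f} g")
```

A single reading can be off by the full ±0.05 g, but the random wobbles cancel out across 100 repeats, which is exactly why scientists repeat experiments and report the average.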
We all make mistakes, so why is error such a big deal in science? Well, for one, we're all trusting scientists to get it right. Every time we step on an airplane, take some medication, or spend millions of dollars on a space telescope, we're trusting that scientists used the right units, calibrated their equipment correctly, and double-checked their technique. Error can also mean the difference between a Nobel Prize and a Cracker Jack prize, so scientists don't take mistakes lightly. Scientists are constantly calculating error to make sure it's within an acceptable range and things are progressing as they would expect them to.
It's always a good idea to know how much error has been introduced into our experiment. We use this handy dandy equation to help us calculate the percentage error in a measurement:

percentage error = |measured value − accepted value| ÷ accepted value × 100%
The smaller the percentage, the closer we are to being awesome. If we get a big number, well, we might need to go back and see where we went wrong. Of course, this equation only works if we know what we're supposed to get. Sometimes, we're in uncharted territory and don't have a target value to aim for. In this case, we just have to do our best to collect accurate data and analyze it properly.
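The percentage error equation is simple enough to wrap in a few lines of Python. The boiling-point numbers below are just a made-up example to show the calculation:

```python
def percent_error(measured, accepted):
    """Percentage error of a measurement against a known accepted value."""
    return abs(measured - accepted) / abs(accepted) * 100

# Hypothetical example: we measured water's boiling point as 97.5 °C,
# while the accepted value at sea level is 100 °C.
print(f"{percent_error(97.5, 100.0):.1f}%")  # prints 2.5%
```

A 2.5% error is pretty respectable. If this function ever spits out a big number, it's time to retrace our steps.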
So, how do we know if a scientist's experiment has error in it? Well, hopefully they calculated it and wrote about it in their conclusions. If they're on the shadier side of the science spectrum, those errors might be hidden better than Waldo. The best way to find errors is to look for data that don't quite fit the trend of the rest of the data. This is easy if there's a graph, but might take some more brainpower if they don't have one or their data are a mess. Also, good ol' common sense goes a long way. If the data just don't seem right or seem too good to be true, ask to see those numbers run again.
Error happens to even the best, most careful scientists. So what should we do if an error sneaks into our science soiree? Well, it depends on where the error came from. In general, sweeping funky numbers under the rug is a serious offense in the science world. However, if we know for a fact that an error came from our own mistake, like spilling some of a liquid sample or writing a number down wrong, we can show that data the door so it doesn't affect how our results are interpreted.
Whether we keep or discard our erroneous data, it's always a good idea to discuss any oops that may have occurred in our report. We can also discuss if there were any equipment limitations, like not having the correct size of graduated cylinder, or environmental factors, like gusty winds or extra humidity, that may have introduced some error into our experiment. Being open about our errors lets other scientists interpret our data better and avoid making the same mistakes. They may even come up with a brand new way of doing things that reduces the chances of making a certain error.
Another source of error that can pop up in experiments is bias. Scientific bias is when someone's expectations influence the results of the experiment. Bias can creep into any aspect of an experiment, from analyzing data to the peer review process, and it can cause an experiment's results to be collected or interpreted incorrectly.
Bias is pretty sneaky in that most of the time scientists don't even realize they're biased. They may interpret data or make observations based on what they think is going to happen, or ignore observations that refute their hypothesis, without even realizing that they're doing it.
How can we avoid bias? It's nearly impossible to eliminate all bias, but conducting blind studies is one way to reduce it. Don't worry, a blind study doesn't involve gouging our eyes out. It just keeps information that may influence the scientist hidden while they're interpreting their data. For example, a scientist may not know which patients received the drug and which received the placebo (a non-functional drug that acts as a control). We can also perform double-blind studies, where both the scientist and the test subject don't know what they've received. This allows scientists to collect data from unbiased patients and then interpret the data without introducing their own bias.
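The mechanics of blinding are simple enough to sketch in code. In this hypothetical setup, a third party randomly assigns each patient ID to the drug or the placebo and keeps the answer key to themselves; everyone else only ever sees the IDs until the data are in:

```python
import random

random.seed(7)  # reproducible demo assignment

# Hypothetical patient IDs, half to get the drug and half the placebo.
patients = ["P01", "P02", "P03", "P04", "P05", "P06"]
treatments = ["drug", "placebo"] * (len(patients) // 2)
random.shuffle(treatments)

# The answer key stays locked away with a third party. The scientist
# (and, in a double-blind study, the patients) see only the IDs until
# all the data have been collected.
key = dict(zip(patients, treatments))

print("What the scientist sees:", patients)
```

Only after the measurements are recorded does anyone open `key` to match results to treatments, so expectations can't nudge the data collection or the interpretation.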
Cool, this whole section was about mistakes. Here's a recap: errors come from humans, from equipment, and from plain randomness. We reduce them by being careful, by choosing and calibrating the right tools, and by repeating our experiments. We calculate percentage error to see how far off we are, report any mishaps honestly in our conclusions, and keep an eye out for sneaky bias.
The Millennium Bridge across the Thames River in London took eighteen months to build. Once the bridge was finally opened to the public, it was closed thirty minutes later. And it was all because of a mathematical error.
When engineers designed the bridge, they used a 2D model. They thought they were being super fancy by accounting for the bridge's up-and-down movement, so that nothing like this would happen.
What they forgot to account for was side-to-side motion. See, what happens when a whole bunch of people walk on the bridge is that it starts to sway side-to-side. You may have experienced this phenomenon on a playground bridge. Hopefully you didn't fall in the imaginary lava.
Fixing this swaying bridge tacked on another $9 million to the already $30 million price tag. Errors aren't cheap.