© 2014 Shmoop University, Inc. All rights reserved.
 


Most calculus courses are very big on the idea of limits. In particular, the books are adamant about defining an integral as the limit of a bunch of finite sums.

This isn't how the development of calculus actually went. Integrals came before limits. The history of calculus goes more like this:

Newton and Leibniz: Hey, let's chop up a region into infinitely many rectangles that are infinitesimally thin, then add up their areas to find the area of the original region! Infinitesimal quantities are neat!

Other Mathematicians: Um... what's an infinitesimal? Any real number that small must be 0. Great ideas, guys, but we need a way to make this more rigorous. How can we do math with things when we don't know what they are?

Later Mathematicians: We can use limits to make that calculus stuff more rigorous! This is how everyone should do it from now on!

Abraham Robinson: Actually, Newton and Leibniz had the right idea. We can make infinitesimals rigorous after all.

Ghosts of Newton and Leibniz: Hah! Told you so!

The point is that it's perfectly fine to think of an integral as an infinite sum of infinitesimally skinny things. That's how integrals were developed in the first place, and for some people this idea makes a lot more intuitive sense than the "limit" definition. It's possible to do calculus without having limits at all, but this idea hasn't caught on in most schools.

You can also think of integrals as a way to multiply two numbers where one of the numbers is changing. When the numbers are constant we have the formula

distance = speed × time.

When speed is changing, we have

distance = ∫ speed dt,

where the integral adds up the tiny bits of distance traveled during each infinitesimally short moment of time.
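The "multiplying a changing quantity" idea can be sketched numerically: chop time into many tiny slices, multiply speed by the tiny time slice, and add the pieces up. Here's a minimal sketch, where the speed function speed(t) = 2t and the time interval [0, 3] are toy assumptions chosen for illustration:

```python
# Approximate distance traveled when speed changes over time,
# using a Riemann-style sum: add up speed * dt over many tiny
# time slices. The speed function here is a made-up example.

def speed(t):
    return 2 * t  # speed grows linearly with time

dt = 0.0001  # width of each tiny time slice
distance = sum(speed(i * dt) * dt for i in range(int(3 / dt)))

# The exact answer is the integral of 2t from 0 to 3, which is 3^2 = 9.
print(distance)  # very close to 9
```

Shrinking dt makes the rectangles skinnier and the sum closer to the exact area, which is exactly the intuition Newton and Leibniz started from.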
