I just started taking Calc 1, and I'm already in love. I just wanted to ask something that stuck with me: can we think of indefinite integrals (antiderivatives) as definite integrals with an arbitrary bottom bound?

Think of this: a slight change of the area under a curve $f(x)$ is $\mathrm{d}s$ (let the function giving the area under that curve be $s(x)$). As $\mathrm{d}x$ gets smaller, $\mathrm{d}s$ is basically an infinitesimally narrow rectangle with width $\mathrm{d}x$ and height $f(x)$, so we write $\mathrm{d}s = f(x)\,\mathrm{d}x$. Dividing both sides by $\mathrm{d}x$ gives $\frac{\mathrm{d}s}{\mathrm{d}x} = f(x)$, which tells us that the derivative of the area function is the original function at that point.

And that's what an antiderivative is, isn't it? It gives a family of functions, all of whose derivatives are $f(x)$, since a constant's derivative is $0$. So we add a constant, $c$, at the end.

But what does $c$ actually represent? We started off by saying $s(x)$ is an area function, which we defined to help us calculate the area under a curve. When we add a $c$, doesn't that mean it accumulates area starting from some arbitrary point? So it's uncertain how much area it will calculate. It will precisely calculate the area under the curve, but starting from where? How much will it accumulate?

So when we pass to definite integrals, we write $F(b)-F(a)$, subtracting the antiderivatives to find a definite area. Can't we think of it like this: we start both of those uncertain area functions (antiderivatives) from the same arbitrary point, say $t$. From $x=t$, we accumulate up to $a$ and up to $b$. When we subtract, we find the accumulated area between $x=a$ and $x=b$.

What do you mathematicians and math lovers think? I'm really sorry if it's a bit long, but I love talking about what I'm passionate about. Respect.
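The last idea can be sanity-checked numerically. The sketch below (my own illustration; the curve $f(x) = x^2$, the bounds, and the candidate starting points $t$ are arbitrary choices) approximates the accumulation function from several different starting points and confirms that the difference of accumulated areas between $a$ and $b$ doesn't depend on $t$:

```python
def f(x):
    return x**2  # example curve; any continuous f would do

def area(t, x, n=100_000):
    # Midpoint Riemann sum approximating the area under f from t to x
    h = (x - t) / n
    return sum(f(t + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 2.0
# Accumulate from three different arbitrary starting points t
results = [area(t, b) - area(t, a) for t in (-3.0, 0.0, 0.5)]
exact = (b**3 - a**3) / 3  # via the antiderivative x^3/3
print(results, exact)  # all three differences agree with the exact area
```

Each `area(t, ·)` plays the role of one of the "uncertain" area functions; subtracting cancels whatever area was accumulated between $t$ and $a$, which is exactly the role of the constant $c$ cancelling in $F(b) - F(a)$.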
- 3 Hi, welcome to Math Stack Exchange. You should try to be clearer in your questions, so that we may more easily answer. 1) Try to use paragraphs to format your question; it is hard to read. 2) Use LaTeX to format your mathematical symbols. A quick guide can be found here: math.meta.stackexchange.com/questions/5020/… – RicardoMM, Feb 8, 2025 at 17:36
- 1 Thank you for the feedback. Will do. – Adonis, Feb 8, 2025 at 17:37
- 1 Regarding your question: historically, definite integrals came first, so you are right in the sense that the added constant is just there to "subtract the bottom value". Or you can think of it in a more abstract sense: a constant function is the only kind on the real line with zero derivative everywhere, and since the derivative is linear, one must add a constant to find all antiderivatives. – RicardoMM, Feb 8, 2025 at 17:41
- I didn't know the history! That's so interesting! So is it the right train of thought, to think of the $c$ as more of a representation of the uncertain bottom bound? Thank you in advance. – Adonis, Feb 8, 2025 at 17:47
- Yes, you can think of it that way, if you think in terms of a proper integral without the limits. But that's not really the point: finding antiderivatives is asking "what functions have derivative equal to this?", and the answer is any function that is a constant plus one particular antiderivative, since all those functions have the same derivative. – RicardoMM, Feb 8, 2025 at 18:53
1 Answer
As a rough intuition, that's a fine way to think about it, and it will carry you a fair way through understanding how integrals work. Unfortunately, it does break down in some circumstances.
An antiderivative $F(x)$ of a function $f(x)$ is any function whose derivative is $f(x)$. Typically, a function will have a whole family of antiderivatives, and if $f(x)$ is continuous on an interval then you can describe that family by picking one particular $F(x)$ and then adding an arbitrary constant, i.e. $F(x) + C$.
If we look at functions whose domains aren't intervals, though, it gets a little trickier. For example, if we start with $f(x) = \frac{1}{x^2}$, then we can see that $F(x) = -\frac{1}{x}$ is an antiderivative of $f$, and so is any function of the form $C - \frac{1}{x}$. But, in fact, that doesn't cover the entire set of antiderivatives. If we look at
$$F(x) = \begin{cases} -\frac{1}{x} & x > 0 \\ 3 - \frac{1}{x} & x < 0 \end{cases}$$
then taking the derivative shows that it's also an antiderivative of $f$, but because there's a gap at $x = 0$ the constant doesn't have to be the same on both sides. The same thing happens for any function where there's a gap in its domain; it's not just because this particular function has vertical asymptotes.
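A quick numeric sketch (my own illustration, not part of the original answer) confirms this: a central-difference approximation of the derivative of the piecewise $F$ above matches $\frac{1}{x^2}$ on both sides of the gap, even though the two branches use different constants:

```python
def F(x):
    # The piecewise antiderivative from the answer: different constants
    # on either side of the gap at x = 0
    return -1/x if x > 0 else 3 - 1/x

def f(x):
    return 1 / x**2

h = 1e-6
# |numeric derivative of F - f| at sample points on both sides of 0
errors = [abs((F(x + h) - F(x - h)) / (2*h) - f(x))
          for x in (-2.0, -0.5, 0.5, 2.0)]
print(errors)  # all tiny: F'(x) = f(x) everywhere on the domain
```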
Writing $F(x) + C$ is just shorthand that's often useful for summarising the family of antiderivatives, especially when you're looking for one particular member of the family - for example, if I write that the antiderivatives of $f(x) = x$ are $\frac{1}{2} x^2 + C$ and then I ask "What antiderivative has the value $5$ when $x = 3$?" I can plug that in and solve $\frac{1}{2}(3)^2 + C = 5$ to get $C = \frac{1}{2}$.
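That plug-and-solve step is easy to check by hand or in code (note that $\tfrac{1}{2}\cdot 3^2 = 4.5$, so the constant comes out to $\tfrac{1}{2}$):

```python
# Find the particular antiderivative F(x) = x^2/2 + C of f(x) = x
# that satisfies F(3) = 5, by solving for C.
x0, y0 = 3.0, 5.0
C = y0 - x0**2 / 2
print(C)  # 0.5

# Verify: the chosen antiderivative really passes through (3, 5)
F = lambda x: x**2 / 2 + C
print(F(3.0))  # 5.0
```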