I read the definition of differentiability given in Tom M. Apostol's Mathematical Analysis: the function $f$ is said to be differentiable at $c$ if there exists a linear function $T_c : \mathbb R^n \to \mathbb R^m$ such that $f(c + v) = f(c) + T_c (v) + ||v|| E_c (v)$, where $E_c (v) \to 0$ as $v \to 0$. Here $T_c$ is a linear function called the total derivative of $f$ at $c$, and $c$ is a point in $\mathbb R^n$. I can relate to the "error term" $E_c$ because I am familiar with Taylor's formula, and this is a first-order approximation of $f$. But I can't make a connection with functions of one variable. Is there a total-derivative equivalent in one dimension? Also, everywhere on the internet this definition seems to be obscure, and I only find the total derivative defined as $df/dt$, like here. What connection am I missing?
- $\begingroup$ A $1 \times 1$ matrix is identified with a single real entry. $\endgroup$ – Randall, Sep 13, 2017 at 19:14
- $\begingroup$ Tom has failed. Cash out and limit our losses before something worse happens. $\endgroup$ – mathreadler, Sep 13, 2017 at 19:15
- $\begingroup$ @Randall can you elaborate please, I don't completely understand. $\endgroup$ – john doe, Sep 13, 2017 at 19:15
- $\begingroup$ Given $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and a point $\mathbf{v}$ in the domain, $df_\mathbf{v}$ is the linear map you mention, which is represented by an $m \times n$ matrix. That's the total derivative. In single-variable calculus you say $f'(2)=4$ instead of discussing the matrix $df_2 = (4)$. $\endgroup$ – Randall, Sep 13, 2017 at 19:17
- $\begingroup$ How is $df_\mathbf{v}$ linear? And how is $f'(x)$ linear? E.g. when $f'(x) = x^2$, is $f'(cx) = c^2 f'(x)$? $\endgroup$ – john doe, Sep 13, 2017 at 19:24
1 Answer
Take $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ (domain can also be an open subset of $\mathbb{R}^n$). We say $f$ is differentiable at $c$ if there exists a linear transformation $T_\mathbf{c}: \mathbb{R}^n \rightarrow \mathbb{R}^m$ such that $$ \lim_{\mathbf{h} \to 0} \ \frac{|f(\mathbf{c}+\mathbf{h})-f(\mathbf{c})-T_\mathbf{c}(\mathbf{h})|}{|\mathbf{h}|} = 0. $$ This is equivalent to what you wrote.
At any rate, we write $df_\mathbf{c} = T_\mathbf{c}$ when it exists. By definition, $df_\mathbf{c}$ is a linear transformation (input variable $\mathbf{h}$) and so is representable by an $m \times n$ matrix $A$. In the general case this is the matrix of partials of the component functions.
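To make the general case concrete, here is a small numerical sketch (the map $f(x,y) = (x^2 y,\ x+y)$ and the point $c=(1,2)$ are my own illustrative choices, not from the answer): we form the matrix of partials $A$ by hand and check that the ratio in the defining limit shrinks as $\mathbf{h} \to 0$.

```python
import math

# Hypothetical example map f: R^2 -> R^2, f(x, y) = (x^2 * y, x + y).
def f(x, y):
    return (x**2 * y, x + y)

# Matrix of partials (Jacobian) at c = (1, 2), computed by hand:
#   d(x^2 y)/dx = 2xy = 4,   d(x^2 y)/dy = x^2 = 1
#   d(x + y)/dx = 1,         d(x + y)/dy = 1
A = [[4.0, 1.0],
     [1.0, 1.0]]
c = (1.0, 2.0)

def ratio(t):
    """|f(c+h) - f(c) - A h| / |h| for h = t * (1, -1)."""
    h = (t, -t)
    fc = f(*c)
    fch = f(c[0] + h[0], c[1] + h[1])
    Ah = (A[0][0] * h[0] + A[0][1] * h[1],
          A[1][0] * h[0] + A[1][1] * h[1])
    err = (fch[0] - fc[0] - Ah[0], fch[1] - fc[1] - Ah[1])
    return math.hypot(*err) / math.hypot(*h)

for t in (1e-1, 1e-3, 1e-5):
    print(t, ratio(t))  # the ratio shrinks toward 0 as t -> 0
```

The ratio going to $0$ is exactly the limit condition above, so $A$ really is the matrix of $df_\mathbf{c}$ for this $f$.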
In the 1-variable case, this is a $1 \times 1$ matrix, which is just a scalar, namely the ordinary derivative you're used to.
Example: let $f(x)=x^2$ and let's examine $f'(3)$ (which will be 6, trust me). The linear transformation $T: \mathbb{R} \to \mathbb{R}$ via $T(h)=6h$ satisfies my condition: $$ \lim_{h \to 0} \ \frac{f(3+h)-f(3)-6h}{h} = 0. $$ (I don't need norms anymore since my $h$ is now a scalar, not a vector: single variable calculus!)
Hence $T(h)=6h$ is the magic transformation, and as $T(1)=6$ we write the derivative as $f'(3)=6$, which is what you've always known from Calc I.
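The example above can be checked numerically: for $f(x)=x^2$ at $c=3$ with candidate map $T(h)=6h$, the ratio in the limit works out to $|h|$ exactly, so it visibly vanishes as $h \to 0$.

```python
# Numerical check of the answer's example: f(x) = x^2 at c = 3,
# candidate linear map T(h) = 6h.
def f(x):
    return x**2

def ratio(h):
    """|f(3+h) - f(3) - 6h| / |h|; should go to 0 as h -> 0."""
    return abs(f(3 + h) - f(3) - 6 * h) / abs(h)

for h in (1e-1, 1e-3, 1e-5):
    print(h, ratio(h))  # approximately |h|, since the numerator is h^2
```

Trying any slope other than $6$ in place of $6h$ makes the ratio approach a nonzero constant instead, which is why $T(h)=6h$ is the unique linear map that works.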
- $\begingroup$ Oh yes, it all makes sense now! $\endgroup$ – john doe, Sep 13, 2017 at 19:49
- $\begingroup$ Wow, really? Great! It took a looooong time for this to hit me when I first encountered it. $\endgroup$ – Randall, Sep 13, 2017 at 19:49
- $\begingroup$ It took me quite long trying to understand it by myself before I posted the question here. $\endgroup$ – john doe, Sep 13, 2017 at 19:51