2
$\begingroup$

Epsilon-delta definitions are obviously better than infinitesimal-based definitions because of tradition dating from the heroic era of Weierstrass's disciples.

On the other hand, in an old comment, Deane Yang mentioned that engineers have no use for the first quantifier in epsilon-delta definitions, since they have a certain fixed allowable error (i.e., a fixed $\epsilon>0$) for which they are looking for a suitable $\delta$. For example, if some engineering problem is formalized by a function $f(x)$ at a point $c$, then for a fixed $\epsilon_0$ engineers will be faced with the following problem: determine whether $$ \exists \delta>0 \, (|x-c|<\delta \longrightarrow |f(x)-f(c)|<\epsilon_0). $$ This problem contains no quantifier alternations at all.

It is precisely the quantifier alternations that are famously the source of difficulty for students learning calculus. People sometimes mention engineering applications as justification for working specifically with the epsilon-delta definition, even though it has more quantifier alternations than the equivalent definitions using infinitesimals (for example, continuity of $f$ at $c$ is expressed by the condition "if $x-c$ is infinitesimal then $f(x)-f(c)$ is infinitesimal"). But given the analysis above, epsilon-delta definitions, with their quantifier alternations, are mostly irrelevant to engineering applications. So the idea that the difficulty of epsilon-delta definitions is justified by their usefulness in practical applications would seem to fall away.
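To make the fixed-tolerance version concrete, here is a minimal numerical sketch (the function, point, and tolerance are arbitrary choices for illustration, not drawn from any engineering source): for a fixed $\epsilon_0$ one simply searches for a single workable $\delta$, with no quantifier alternation in sight.

```python
def find_delta(f, c, eps0, delta0=1.0, samples=1000, max_halvings=60):
    """Search for a delta > 0 such that |x - c| < delta implies
    |f(x) - f(c)| < eps0, by halving a candidate delta until a dense
    sample of the interval passes.  A numerical check, not a proof --
    but it matches the engineer's fixed-epsilon_0 problem."""
    fc = f(c)
    delta = delta0
    for _ in range(max_halvings):
        xs = [c - delta + 2 * delta * k / samples for k in range(samples + 1)]
        if all(abs(f(x) - fc) < eps0 for x in xs):
            return delta
        delta /= 2
    return None  # no suitable delta found at this resolution

# Fixed allowable error eps0 = 0.1 for f(x) = x^2 near c = 3:
delta = find_delta(lambda x: x * x, 3.0, 0.1)
print(delta)
```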

There are additional reasons why epsilon-delta definitions may be preferable to infinitesimal ones, such as tradition (as per above) and the fact that an overwhelming majority of schools and universities use the epsilon-delta approach. But:

Question 1. Is it correct to assert that the specific reason based on engineering applications and the like falls away, as per the analysis above?

The assumption that epsilon-delta is somehow helpful for practical problems of approximation and estimation (and more specifically for engineering students) can be traced as far back as 1977, when Bishop wrote:

"(ε, δ)-definition of limit is common sense, and moreover it is central to the important practical problems of approximation and estimation" (Bishop, Errett (1977), "Review: H. Jerome Keisler, Elementary calculus", Bull. Amer. Math. Soc., 83: 205–208, doi:10.1090/s0002-9904-1977-14264-x)

It can also be seen in this highly upvoted 2012 answer to an open question: https://mathoverflow.net/questions/88540/how-to-motivate-and-present-epsilon-delta-proofs-to-undergraduates/88561#88561 It precedes Gubkin's 2014 answer at the Education SE.

Another response assuming that epsilon-delta is necessary for engineering students: https://matheducators.stackexchange.com/a/20803

A similar assumption is made in this recent article by Hill and More:

Hill, G. More, T. "Connecting experimental uncertainty to Calculus and to Engineering Design". Journal of Higher Education Theory and Practice 21(5) (2021), 208-213.

Question 2. Are there earlier sources for the assumption that epsilon-delta is helpful to engineering students, preferably in a calculus textbook?

An interesting case study is the textbook by Thomas and Finney. They often emphasize engineering applications. They also talk about error tolerance (pages 68, 69, 70, 103, 106, 253, 272 of the 9th edition). But significantly, they view discussions of error tolerance as a way of motivating epsilon-delta definitions (whether they are successful is another matter). They never present epsilon-delta definitions as useful material for engineers to learn as a way of preparing them for the idea of error tolerance.

Gubkin wrote: "I hope you can see that the basic form of the ϵ-δ argument does mirror the sort of analysis that a practical user of mathematics would need to employ when they are thinking about error in their measurements. Making this connection is, in my opinion, important to motivate the ϵ-δ definition of continuity." This is true, but it does not follow that engineering students should learn epsilon-delta as motivation for error tolerance. The motivation goes the other way, as illustrated by the examples in Thomas and Finney.

$\endgroup$
30
  • 3
    $\begingroup$ Tradition does not make something better. Infinitesimals were only made rigorous in the 1960s, so of course the tradition is epsilon-delta limits. $\endgroup$ Commented May 2 at 15:22
  • 6
    $\begingroup$ The premise of the question strikes me as utterly unrealistic. The question whether there exists a $\delta > 0$ that does something is just as irrelevant for a concrete engineering task as the question whether something is true for all $\varepsilon > 0$. If you want to build or implement something, you have practical constraints, not some abstract question about whether something exists. Moreover, in many tasks engineers don't have such an explicit function $f$ to work with, and even in situations where they have one they often won't phrase the problem like this. $\endgroup$ Commented May 2 at 19:34
  • 6
    $\begingroup$ "People sometimes mention engineering applications as justification for working specifically with the epsilon-delta definition..." I've never encountered, nor heard of this. Is there a reference to support this claim? $\endgroup$ Commented May 3 at 15:56
  • 4
    $\begingroup$ As an engineer, I have never used epsilon-delta for any engineering application, nor have I ever spoken with an engineer that used them. Like @DanielR.Collins, I would like a reference to support the claim that we use them at all. If we need to do calculus, we typically assume calculus works. And we also remember that everything in engineering has tolerances associated with it. We almost never get to care about the case where the tolerances approach 0 with nice well-behaved closed form equations. $\endgroup$ Commented May 3 at 19:18
  • 4
    $\begingroup$ That is not a strong reference. (Throwaway comment, deep in a comment thread, on a lowest-ranked answer, to a SE MO question, that got closed, posed by yourself) $\endgroup$ Commented May 5 at 14:03

3 Answers

2
$\begingroup$

Epsilon-Delta is really a pure math concept. Tying it to engineering activities does not really help with understanding.

Epsilon-delta doesn't really come up in engineering at all. For context, I've been an engineer for several decades. The last time I had to think about epsilon-delta was in my Calculus I course where these arguments were introduced. We did a few of them, and then said "that's nice; now let's build up a playbook of rules so that we never have to do one of these again." For the entirety of my career, I have been permitted to assume that calculus works.

Indeed, we don't even need epsilon-delta to do calculus. It was nearly 200 years from the creation of calculus to the first time we see epsilon-delta in its current form. The purpose of epsilon-delta is to put calculus on a sound mathematical footing. Engineers simply assume said footing is sound.

There's some superficial similarity to error-bounding activities we do in engineering. However, in our case we usually have the goal of finding a specific $\delta$ -- typically either a maximum or some process-specified value. Showing that one exists for an arbitrary range of epsilons is very different. One often accepts a far worse formula for $\delta$ from a finite-estimation perspective because it makes it easier to demonstrate that the inequality holds for the trickier real numbers as one marches toward the infinitesimal. Indeed, epsilon-delta proofs don't even need to be constructive: I don't have to come up with a function $\delta(\epsilon)$, merely show that one must exist. In engineering, it's very rare to find use for such non-constructive proofs. We almost always have to find the values.

You quote Bishop saying it's "...central to the important practical problems of approximation and estimation." I don't have that book, so I can't see the context, but I would generally disagree. I do approximation and estimation all the time and have never used epsilon-delta in any way. I don't just care that an estimation process converges on the correct result when taken to the limit; I care that it gets there in a practical amount of time.

A great example of this sort of concern is matrix multiplication. There are some astonishingly fast ways to do matrix multiplication when one looks at arbitrarily large matrices. However, the "fastest" algorithms in the limit sense are called "galactic algorithms": the constants involved are so large that these fancy multiplication methods don't beat the simpler ones until you have a computer the size of our galaxy storing a single matrix. Sure, I could prove out that limit using epsilon-delta if I were so inclined. But as an engineer, those algorithms simply aren't applicable to me.

I also care greatly about precision losses that arise from approximating the equations using floating-point numbers (IEEE 754). There are plenty of algorithms which work in theory but in practice run into catastrophic cancellation issues. Those aren't even technically operating over real numbers anymore, so the formalism of epsilon-delta is lost in translation.

$\endgroup$
4
$\begingroup$

I would think that an engineer would commonly want to investigate a range of possible options: what $\delta$ would I need to get $|f(x)-f(c)|< \epsilon_1,\epsilon_2,$ or $\epsilon_3$? These might represent different specifications needed for different applications, or just part of a cost/benefit analysis. It is easier to solve one problem using a variable $\epsilon$ than to redo the same calculations for $3$ different particular values of $\epsilon$.

EDIT:

An engineer really wouldn't ever care about a bare existence theorem. They are only interested in constructive mathematics. The constructive way to prove that $\forall \epsilon >0 \, \exists \delta >0\, |x-c| < \delta \implies |f(x)-f(c)|<\epsilon$ is to produce a function $T:\mathbb{R}^+ \to \mathbb{R}^+$ for which, for all $\epsilon$, $|x-c|<T(\epsilon) \implies |f(x)-f(c)|<\epsilon$. In other words it is useless to know that some $\delta$ exists for a given $\epsilon$ unless you have a way to find it ($T$). Oftentimes there will be greater cost associated with a smaller $\delta$, and we will want to find the largest $\delta$ which will get the job done. In other words, it is desirable to find an explicit formula for $$T(\epsilon) = \sup \{\delta >0: |x - c|<\delta \implies |f(x) - f(c)|<\epsilon\}$$

if possible. Sometimes explicitly calculating this supremum is impossible and we have to settle for a suboptimal $T$.
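As a toy case where the supremum can be computed exactly (my own example): for $f(x) = x^2$ at $c = 2$, solving $4 - \epsilon < x^2 < 4 + \epsilon$ shows the binding constraint is the right endpoint, giving the optimal modulus $T(\epsilon) = \sqrt{4 + \epsilon} - 2$, which the sketch below checks numerically.

```python
import math

def T(eps):
    """Optimal modulus for f(x) = x**2 at c = 2: the largest delta
    such that |x - 2| < delta implies |x**2 - 4| < eps.  The binding
    constraint is the right endpoint, where x**2 reaches 4 + eps."""
    return math.sqrt(4 + eps) - 2

# Just inside T(eps) the bound holds; just beyond it, it fails.
for eps in (0.1, 1.0, 10.0):
    d = T(eps)
    assert abs((2 + 0.999 * d) ** 2 - 4) < eps    # inside: within tolerance
    assert abs((2 + 1.001 * d) ** 2 - 4) >= eps   # beyond: tolerance violated
```

Any smaller positive $T$ would also witness continuity; the supremum is just the most generous input tolerance in the cost/benefit sense described above.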

A practical example:

We are designing a rocket to be launched with an initial velocity of exactly $1800 \, \frac{\textrm{m}}{\textrm{s}}$. We want to hit a target exactly $300 \, \textrm{km}$ away. To minimize civilian casualties we would like the blast radius of the payload to be as small as possible. However, the smaller the blast radius we specify, the more precisely we will need to control the launch angle ($\theta$). Engineering more precise control of that angle is costly, so we will need to conduct a cost/benefit analysis.

  1. Assuming flat ground and no air resistance, what value $\theta$ would be needed to hit the target exactly?
  2. We are interested in exploring a range of possible payload strengths. Call the blast radius of the payload $\epsilon$ meters. What is the maximum tolerable error ($\delta$ radians) of $\theta$ to make sure that our target is caught in the blast?

The best answer to this question is to conscientiously object to performing the calculation. The second best answer will reproduce a standard $\epsilon$-$\delta$ style proof of the continuity of the range function $R(\theta) = \frac{v_0^2 \sin(2\theta)}{g}$ at the value of $\theta_0$ which solves $R(\theta_0) = 300\,\textrm{km}$.

Note: There are some obvious objections to this particular problem on practical grounds:

  1. We assumed exact knowledge of the initial velocity and distance to the target.
  2. We assumed no air resistance and perfectly flat ground (no elevation change between rocket origin and target).
  3. We implicitly assume that any blast radius $\epsilon$ meters would be practically relevant, with smaller $\epsilon$ having greater benefit in terms of civilian safety. There is, of course, some lower limit to what is practical here. An $\epsilon$ of $0.01$ is no safer than an $\epsilon$ of $0.1$. In reality we probably only need to worry about $\epsilon \in [0.1, 50]$.
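For what it's worth, the $\delta$ in this toy problem can be written down explicitly by inverting $R$ on the low-angle branch. A sketch under the problem's own assumptions (flat ground, no drag; I additionally fix $g = 9.81\,\mathrm{m/s^2}$, which the problem statement leaves implicit):

```python
import math

v0, g, target = 1800.0, 9.81, 300_000.0  # m/s, m/s^2, m (g is my assumption)

def R(theta):
    """Flat-ground, no-drag range for launch angle theta (radians)."""
    return v0**2 * math.sin(2 * theta) / g

def theta_for_range(r):
    """Low-angle solution of R(theta) = r."""
    return 0.5 * math.asin(g * r / v0**2)

theta0 = theta_for_range(target)  # angle that hits the target exactly

def delta(eps):
    """Largest angle tolerance such that |theta - theta0| < delta
    implies |R(theta) - target| < eps, on the low-angle branch."""
    lo = theta_for_range(target - eps)   # angle falling eps short
    hi = theta_for_range(target + eps)   # angle overshooting by eps
    return min(theta0 - lo, hi - theta0)

# Angle tolerance (radians) needed for a 50 m blast radius:
print(math.degrees(theta0), delta(50.0))
```

The cost/benefit analysis then amounts to comparing the cost of achieving angle tolerance $\delta(\epsilon)$ against the benefit of blast radius $\epsilon$.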

Even so, I hope you can see that the basic form of the $\epsilon$-$\delta$ argument does mirror the sort of analysis that a practical user of mathematics would need to employ when they are thinking about error in their measurements. Making this connection is, in my opinion, important to motivate the $\epsilon$-$\delta$ definition of continuity. In fact, we could give the following alternative definition of the continuity of a function $f$ at a point $x = x_0$ which aligns with this motivation:

Definition: A function $f:(a,b) \to \mathbb{R}$ is continuous at $x_0 \in (a,b)$ if and only if there is a function $T:\mathbb{R}^+ \to \mathbb{R}^+$ so that, for all $\epsilon > 0$ if $|x - x_0| < T(\epsilon)$ then $|f(x) - f(x_0)| < \epsilon$.

In other words, if you want to control $f(x)$ to within $\pm \epsilon$ of the desired target, we have a recipe for figuring out what error tolerance $T(\epsilon)$ we must achieve for the inputs (i.e. $x_0 \pm T(\epsilon)$) to obtain the desired level of accuracy in the outputs.

ADDITIONAL EDIT:

Of course, such simplistic examples are not representative of what engineers will do in the real world. The kinds of examples one can present in an intro calculus class are necessarily toy examples. You have to crawl before you can walk, and walk before you can run. Here is an example (found by ChatGPT!) of "running":

https://www.mdpi.com/2076-3417/13/8/4999

As a satellite’s critical load-bearing structure, the large-scale space deployable mechanism (LSDM) is currently assembled using ground precision constraints, which ignores the difference between the ground and space environments. This has resulted in considerable service performance uncertainties in space. To improve satellite service performance, an assembly error model considering the space environment and a tolerance dynamic allocation method based on as-built data are proposed in this paper. Firstly, the factors influencing the service performance during ground assembly were analyzed. Secondly, an assembly error model was constructed, which considers the influence factors of the ground and space environment. Thirdly, on the basis of the assembly error model, the tolerance dynamic allocation method based on as-built data was proposed, which can effectively reduce the assembly difficulty and cost on the premise of ensuring service performance. Finally, the proposed method was validated in an assembly site, and the results show that the pointing accuracy, which is the core indicator of the satellite service performance, was improved from 0.068° to 0.045° and that the assembly cost was reduced by about 13.5%.

A casual glance through the paper will show a much more sophisticated setup and set of approaches, but the basic goal is still to figure out what kind of control of some "input variables" is needed to achieve a desired control of some "output variables". This is epsilon/delta.

$\endgroup$
5
  • $\begingroup$ You commented a few years ago as follows: "In every application, you need to know how good a control on inputs you need to guarantee a given error tolerance on the outputs. Without this bridges collapse, and airplanes fail to get off the ground." I agree that the idea of error tolerance is the issue here. The conjunction formula you propose is equivalent to the one I mentioned in my question (without quantifier alternations), with the minimum of $\epsilon_1, \epsilon_2$, and $\epsilon_3$ for the value of $\epsilon_0$. $\endgroup$ Commented May 5 at 6:17
  • $\begingroup$ @MikhailKatz Editing my answer because my reply is too long for a comment. $\endgroup$ Commented May 5 at 11:34
  • $\begingroup$ Steven, I didn't downvote, but it seems to me that you are reproducing Bishop's claim that the constructive approach to epsilon-delta is useful in "practical applications", without providing convincing evidence for this. $\endgroup$ Commented May 12 at 8:32
  • $\begingroup$ @MikhailKatz Do you disagree that if you want to hit a target with a projectile that it is useful to consider various error tolerances for the target, and figure out what kind of control over the inputs you would need to achieve those tolerances? This seems just "plainly" useful. $\endgroup$ Commented May 12 at 12:05
  • $\begingroup$ I made one further edit. $\endgroup$ Commented May 12 at 12:17
3
$\begingroup$

One frequently occurring task in engineering is to decompose one complex problem into a bunch of smaller simpler problems.

The epsilon-delta definition allows one to do this, because it allows many tiny steps to be chained together to obtain one big, complex result in the end.

Suppose that you have three statements similar to that in your question:

  • $\forall \color{red}{\epsilon > 0} . \exists \color{green}{\delta > 0} .\forall x_1, x_2.|x_1 - x_2| <\delta \to |f(x_1) - f(x_2)| < \epsilon$
  • $\forall \color{green}{\delta > 0} . \exists \color{blue}{\gamma > 0} .\forall y_1, y_2.|y_1 - y_2| <\gamma \to |x(y_1) - x(y_2)| < \delta$
  • $\forall \color{blue}{\gamma > 0}.\exists \color{magenta}{\beta >0}. \forall z_1,z_2.|z_1 - z_2| < \beta \to |y(z_1) - y(z_2)| < \gamma$

Then you can nicely compose them all together to infer:

  • $\forall\color{red}{\epsilon>0}.\exists\color{blue}{\gamma > 0}.\forall y_1,y_2.|y_1 - y_2|<\gamma\to |f(x(y_1)) - f(x(y_2))| < \epsilon$ (first two)
  • $\forall\color{red}{\epsilon>0}.\exists\color{magenta}{\beta > 0}.\forall z_1,z_2.|z_1 - z_2| < \beta \to |f(x(y(z_1))) - f(x(y(z_2)))| < \epsilon$ (all three).

Here, you immediately obtain that the "complex" function $f \circ x \circ y$ is uniformly continuous whenever $f$, $x$ and $y$ are uniformly continuous.

This is because each of those "$\epsilon$-$\delta$" (or "$\gamma$-$\beta$", or whatever) statements acts as a little composable box with inputs ($\epsilon$) and outputs ($\delta$), and you can easily chain multiple such boxes together through their compatible "interfaces".

Moreover, if each of those "existential" statements is actually constructive, i.e. gives you an actual "$\delta$" for each "$\epsilon$" (for simple engineering applications, this is often constructive), then you can literally chain together a bunch of little modular "$\epsilon$-$\delta$" algorithms that will give you the required estimates for the big complex problem.
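Under that constructive reading, the chaining is literally function composition of the moduli. A minimal sketch with made-up Lipschitz moduli (my own illustration, not from any source above):

```python
def compose_moduli(*moduli):
    """Given moduli T_f, T_x, T_y (where T(eps) is an input tolerance
    guaranteeing output error < eps), return the modulus of the
    composition f(x(y(.))): push eps through T_f, then T_x, then T_y."""
    def T(eps):
        for Ti in moduli:  # listed outermost first: T_f, T_x, T_y
            eps = Ti(eps)
        return eps
    return T

# Toy moduli: if f, x, y are Lipschitz with constants 3, 2, 5,
# then T(eps) = eps / L is a valid modulus for each.
T_f = lambda eps: eps / 3
T_x = lambda eps: eps / 2
T_y = lambda eps: eps / 5

T_comp = compose_moduli(T_f, T_x, T_y)
print(T_comp(30.0))  # 30 / 3 / 2 / 5 = 1.0
```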

Without the "$\epsilon$" part, this entire "lego set" becomes useless, because nothing can be composed any more: instead of composable $\epsilon$-$\delta$ "lego bricks" you would give your students a pile of irregularly shaped pebbles, out of which nothing interesting can be built.

$\endgroup$
2
  • $\begingroup$ Agreed: if one is not allowed to quantify over the epsilon, it wouldn't make a very good mathematical definition. But that's not what the question is about. $\endgroup$ Commented May 5 at 6:34
  • 1
    $\begingroup$ If we take Wiki's definition of "engineering" to be "The creative application of scientific principles to design [...] structures or systems, singly or *in combination*, with consideration for [...] safety to life and property.", then above I've tried to make the point that the epsilon-delta-style definitions are necessary for the compositionality (otherwise the "in combination" part does not work). Since the epsilon-delta definitions cannot be replaced by formulas with hardcoded epsilons without breaking compositionality, the "analysis" (or rather an offhandedly made comment) is incorrect $\endgroup$ Commented May 5 at 20:37
