$\begingroup$

Theorem: Let $a_{n}=a_{n-1}+1, a_1=1$. For any $n$, in order to compute $a_n$, it is necessary to compute $a_i$ for each $i=1,\dots,n-1$, which takes $\Theta(n)$ time.

Proof: This is vacuously true for $n=1$. Assume true for $n=k-1$. Prove true for $n=k$. In order to compute $a_{k-1}+1$, it is necessary to compute $a_{k-1}$. Then since $a_k=a_{k-1}+1$, in order to compute $a_k$, it is necessary to compute $a_{k-1}$. By the induction hypothesis, in order to compute $a_{k-1}$, it is necessary to compute $a_i$ for each $i=1,\dots,k-2$. Hence, in order to compute $a_k$, it is necessary to compute $a_i$ for each $i=1,\dots,k-1$. QED

What is wrong with this proof? It seems valid to me, even though the theorem is false.
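For concreteness, here is a small Python sketch (my own illustration, not part of the original question) of the one algorithm the proof implicitly considers: unwinding the recurrence step by step.

```python
def a(n):
    """Compute a_n by following the recurrence a_n = a_{n-1} + 1, a_1 = 1.
    This particular algorithm does Theta(n) work."""
    if n == 1:
        return 1
    return a(n - 1) + 1

assert a(6) == 6
```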

$\endgroup$
  • $\begingroup$ Your proof has shown that it is no worse than $\Theta(n)$, but it has not shown that there is no more clever way to compute $a_n$. $\endgroup$ Commented Mar 11, 2013 at 16:52
  • $\begingroup$ Just to be pedantic: the whole point of algorithmic complexity is that there are often many different programs that compute the same values given the same input. A particular algorithm has a complexity - your algorithm has complexity $O(n)$. But that doesn't mean that every program that computes the same values on the same input is of complexity $O(n)$. The general question of complexity is, "How can I find an algorithm which computes the same thing faster?" $\endgroup$ Commented Mar 11, 2013 at 17:21
  • $\begingroup$ Reminds me of the "All horses are the same color" paradox - en.wikipedia.org/wiki/All_horses_are_the_same_color $\endgroup$ Commented Mar 11, 2013 at 17:28
  • $\begingroup$ I'd never heard of the "all horses are the same color" paradox until now. That's a good one. $\endgroup$ Commented Mar 11, 2013 at 17:50

2 Answers

$\begingroup$

In your induction step, you made the additional assumption (beyond the inductive hypothesis) that it was necessary to compute $a_{k-1}$ in order to compute $a_k$. That's hardly the case, as simple inspection of the recursive definition gives us the closed-form definition $a_n:=n$ for all $n$.
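A minimal sketch (my own, with illustrative names) checking that the closed form satisfies the recursive definition, so no step-by-step computation is needed:

```python
def a_closed_form(n):
    """a_n = n satisfies a_1 = 1 and a_n = a_{n-1} + 1, so it gives a_n in O(1)."""
    return n

# The closed form agrees with the recurrence at every index we check:
assert a_closed_form(1) == 1
assert all(a_closed_form(n) == a_closed_form(n - 1) + 1 for n in range(2, 100))
```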

$\endgroup$
  • $\begingroup$ You've shown that the theorem is false. But why is the proof invalid? In order to compute $a_{k-1}+1$, it is certainly necessary to know what $a_{k-1}$ is, so you have to compute it. Then since $a_k=a_{k-1}+1$, in order to compute $a_k$, you have to compute $a_{k-1}$. $\endgroup$ Commented Mar 11, 2013 at 17:00
  • $\begingroup$ Why is it necessary to know what $a_{k-1}$ is in order to compute $a_k$? You've not justified this, and you can't, because it isn't true. For example, I can tell you that $a_{1000000}=1000000$ without ever thinking about $a_{999999}$. Your induction step rests on this false assumption, and by assuming that a false statement is true, you allow yourself to draw the false conclusion. $\endgroup$ Commented Mar 11, 2013 at 17:05
  • $\begingroup$ If recursion were the only way to determine the values of the various $a_n$, then we would need to do an extra computation at each step. That isn't the case here, though. $\endgroup$ Commented Mar 11, 2013 at 17:07
  • $\begingroup$ You are confusing a definition with an algorithm. $a_1=1, a_{n+1}=a_n+1$ is a definition of a sequence. It is also an implicit algorithm for computing that sequence, but it is not the only algorithm that computes the $n$th element of that sequence. $\endgroup$ Commented Mar 11, 2013 at 17:12
  • $\begingroup$ What you've shown is that your algorithm is $O(n)$, not that any algorithm that computes $a_n$ on input $n$ is $O(n)$. @CraigFeinstein $\endgroup$ Commented Mar 11, 2013 at 17:16
$\begingroup$

The fallacy is at a simpler level than anything about algorithms or runtime complexity. The point is that

a sequence that satisfies one definition can also satisfy many other definitions.

The alternative definitions might include ones that do not use recursion, or are more computationally efficient when converted to an algorithm. Therefore,

having one definition of a sequence does not determine its complexity (in any sense of that word), only an upper bound on the complexity.

As an example, the sequence defined recursively by $$a_n = a_{n-1} + a_{n-2} - a_{n-3}$$ for $n>3$ and $a_1=a_2=a_3=1$, is constant. Evaluated naively, that recursive definition takes an exponential number of operations to compute $a_n$, but the sequence can also be computed from a different definition, $(\forall n)\, a_n = 1$.
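The gap between the two definitions can be seen in a short Python sketch (names are mine, not from the answer): evaluating the recurrence literally branches into three recursive calls per step, while the alternative definition is constant-time.

```python
def a_naive(n):
    """Evaluate a_n = a_{n-1} + a_{n-2} - a_{n-3}, a_1 = a_2 = a_3 = 1, literally.
    Without memoization the call tree grows exponentially in n."""
    if n <= 3:
        return 1
    return a_naive(n - 1) + a_naive(n - 2) - a_naive(n - 3)

def a_constant(n):
    """The same sequence via the alternative definition a_n = 1 -- O(1)."""
    return 1

# Both definitions specify the same (constant) sequence:
assert all(a_naive(n) == a_constant(n) == 1 for n in range(1, 15))
```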

Proving that different definitions specify the same sequence can be more logically complicated than applying either definition alone. Fermat's Last Theorem can be seen as an alternative definition of the sequence $0,0,0,\dots$, but the demonstration involved a few thousand pages of theory.

$\endgroup$
