I've been reading Brent's original paper and struggling with his worst-case bounds derivation. (I won't even bother with the expected behavior analysis.) I follow the Floyd's algorithm analysis, as that's straightforward.
For simplicity, take $q = 2, u = 0$ as in practice. This is the algorithm in pseudocode, without multiple statements compressed into one line:
```python
y = x0
r = 1
k = 0
done = False
while not done:
    x = y
    j = k
    r *= 2
    while not (done or k >= r):
        k += 1
        y = f(y)
        done = (x == y)
n = k - j
```

Intuitively, after each outer loop iteration, the values when the inner loop exits are
- $0 = j < k \le r = 2$
- $2 = j < k \le r = 4$
- $4 = j < k \le r = 8$
- etc.
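To check this pattern concretely, here is the pseudocode as runnable Python, tracing $(j, k, r)$ at each inner loop exit. (The choice of $f$ and $x_0$ below is arbitrary, just for illustration.)

```python
def brent_trace(f, x0):
    """Brent's cycle detection (q = 2, u = 0), recording (j, k, r)
    each time the inner loop exits."""
    y = x0
    r = 1
    k = 0
    done = False
    trace = []
    while not done:
        x = y
        j = k
        r *= 2
        while not (done or k >= r):
            k += 1
            y = f(y)
            done = (x == y)
        trace.append((j, k, r))
    return trace

# Arbitrary example: f(x) = (x^2 + 1) mod 255, x0 = 3.
# This sequence has tail length m = 2 and cycle length n = 6.
trace = brent_trace(lambda x: (x * x + 1) % 255, 3)
for j, k, r in trace:
    print(f"j = {j}, k = {k}, r = {r}")
# j = 0, k = 2, r = 2
# j = 2, k = 4, r = 4
# j = 4, k = 8, r = 8
# j = 8, k = 14, r = 16
```

The $j$ values $0, 2, 4, 8$ match $j = 2^{s-1}[s > 1]$, and on exit $n = k - j = 14 - 8 = 6$, the cycle length.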
so eq 3.3, $j = 2^{s-1} [s > 1]$, and eq 3.4, $k = j+n \le 2^s$, make perfect sense.
I don't understand the derivations from 3.7 to 3.10. Eq 3.5 defines $\bar s$ as the smallest positive integer such that $2^{\bar s-1} \ge \max(m, n+1)$, and eq 3.6 says $2^{\bar s} - 2^{\bar s - 1} - 1 \ge n$. Fine. But how does he conclude $s \le \bar s$?
If I compare $k \le 2^s$ with $2^{\bar s}$, and $j = 2^{s-1}$ with $2^{\bar s-1}$, which I guess is what's implied, I can only get $2^s - 2^{s-1} \ge k - j = n$. So both $2^{\bar s - 1} - 1$ and $2^{s-1}$ are $\ge n$, but that doesn't tell me how $s$ compares to $\bar s$. If the second inequality ran the other way, $2^{s-1} \le n$, then it would make sense to conclude $s \le \bar s$.
I also don't understand the leap of logic to eq 3.7. The best I can come up with is an argument by contradiction: suppose $j > 2 \max(m, n+1)$. Then $j/2 = 2^{(s-1)-1} \ge \max(m, n+1)$, so $s-1$ satisfies the defining inequality of eq 3.5, which contradicts the minimality of $\bar s$. But this feels roundabout, and I'm not sure it's right, when there should be a simpler approach.
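Lacking the derivations, I at least checked both claims numerically over random functions on a small domain: $s \le \bar s$ and $j \le 2\max(m, n+1)$ held in every trial. (The helper names and the trial setup below are my own scaffolding, not from the paper.)

```python
import random

def tail_and_cycle(f, x0):
    """Tail length m and cycle length n of x0, f(x0), f(f(x0)), ...
    found by direct simulation."""
    seen, x, i = {}, x0, 0
    while x not in seen:
        seen[x] = i
        x = f(x)
        i += 1
    return seen[x], i - seen[x]  # m, n

def brent_s_j(f, x0):
    """Run the algorithm above; return (s, j) at termination, where s is
    the number of outer-loop iterations (so r = 2^s and j = 2^(s-1))."""
    y, r, k, s = x0, 1, 0, 0
    done = False
    while not done:
        x = y
        j = k
        r *= 2
        s += 1
        while not (done or k >= r):
            k += 1
            y = f(y)
            done = (x == y)
    return s, j

random.seed(1)
N = 64
for _ in range(2000):
    table = [random.randrange(N) for _ in range(N)]  # random function on {0..N-1}
    f = lambda x: table[x]
    m, n = tail_and_cycle(f, 0)
    s, j = brent_s_j(f, 0)
    s_bar = 1  # smallest positive integer with 2^(s_bar - 1) >= max(m, n+1)
    while 2 ** (s_bar - 1) < max(m, n + 1):
        s_bar += 1
    assert s <= s_bar and j <= 2 * max(m, n + 1), (m, n, s, j, s_bar)
print("both bounds held in all trials")
```

So the claims look true empirically; it's the derivation I can't reconstruct.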