Over at this question about inversion counting, I found a paper that proves a lower bound on space complexity for all (exact) streaming algorithms. I have claimed that this bound extends to all linear-time algorithms. This is a bit bold: in general, a linear-time algorithm can jump around at will (random access), which a streaming algorithm cannot; it has to inspect the elements in order. It may perform multiple passes, but only constantly many (for linear runtime).
Therefore my question:
Can every linear-time algorithm be expressed as a streaming algorithm with constantly many passes?
Random access seems to prevent a (simple) construction proving a positive answer, but I have not been able to come up with a counterexample either.
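For concreteness, here is a minimal sketch (in Python; names and interface are mine, purely illustrative) of the access discipline I have in mind: a p-pass streaming algorithm may traverse the input left to right, a constant number of times, keeping only its own working state between passes.

```python
def streaming_run(stream, passes, init, step):
    """Hypothetical p-pass streaming skeleton, for illustration only:
    the input is read strictly left to right, `passes` times, and only
    the working state survives between passes; no a[i]-style indexing."""
    state = init()
    for p in range(passes):
        for x in stream:          # one sequential pass over the input
            state = step(state, x, p)
    return state

# A one-pass computation such as the running maximum fits this shape directly:
print(streaming_run([3, 1, 4, 1, 5], 1,
                    lambda: float("-inf"),
                    lambda s, x, p: max(s, x)))   # -> 5
```

A positive answer would mean that every linear-time algorithm can be rewritten into this shape with passes in O(1), despite losing the ability to index arbitrarily.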
Depending on the machine model, random access may not even be an issue, runtime-wise. I would be interested in answers for these models:
- Turing machine, flat input
- RAM, input as array
- RAM, input as linked list (see the sketch below)
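Regarding the last model: on a RAM whose input is given as a linked list, "random access" is not a unit-cost operation in the first place, since reaching position k means chasing k pointers. A minimal illustration (again Python, with hypothetical names):

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def access(head, k):
    """Reaching position k in a singly linked list takes Theta(k)
    pointer dereferences, so arbitrary indexing is not free here."""
    node = head
    for _ in range(k):
        node = node.next
    return node.value
```

So in this model, arbitrary jumps are expensive to begin with, whereas on a RAM with the input as an array, an access a[i] really does cost O(1).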