There is a naive semidecision algorithm for the halting problem: simply simulate the program and see whether it halts. Is there an algorithm that is asymptotically faster? More precisely, suppose the naive algorithm halts in $f(M)$ steps on a halting input $M$; is there a semidecision algorithm that halts in $o(f(M))$ steps as the size of $M$ tends to infinity?
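To fix ideas, here is a minimal sketch of the naive semidecider for a toy machine model (the instruction set and step accounting are illustrative assumptions, not part of the question):

```python
import itertools

def run_step(state):
    """One step of a toy machine. State is (pc, counter, program).
    Instructions (hypothetical, for illustration only):
      ("dec_jnz", target) -- decrement counter, jump to target if nonzero;
      ("halt",)           -- stop."""
    pc, counter, prog = state
    op = prog[pc]
    if op[0] == "halt":
        return None  # signal that the machine has halted
    if op[0] == "dec_jnz":
        counter -= 1
        pc = op[1] if counter != 0 else pc + 1
    return (pc, counter, prog)

def naive_semidecide(prog, counter):
    """Simulate the program step by step; return the step count f(M)
    if it halts, and diverge otherwise."""
    state = (0, counter, prog)
    for steps in itertools.count():
        state = run_step(state)
        if state is None:
            return steps + 1
```

On this toy model the semidecider's running time is, up to a constant per-step overhead, exactly the halting time of the simulated program.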
First, it is trivially possible to construct a program that detects an infinite family of halting machines faster: for example, we test whether the source code is $\texttt{for i in range(1,}n\texttt{): print("a")}$, where $n$ is a numeral, and if so we immediately report that the program halts. So we require a strict speedup that eventually holds for all programs, not just for some infinite family.
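A sketch of this trivial fast path (the regex and the fallback convention are my own assumptions):

```python
import re

# Recognize one syntactic family of provably halting programs.
FAMILY = re.compile(r'^for i in range\(1,(\d+)\): print\("a"\)$')

def fast_path(source):
    """Answer "halts" in time linear in |source| for members of the
    family; return None to tell the caller to fall back to naive
    simulation for everything else."""
    if FAMILY.match(source):
        return True
    return None
```

This beats simulation by an unbounded factor on the family (answering in $O(|M|)$ time rather than $\Theta(n)$ time), which is exactly why the question must quantify over all programs eventually.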
Without specifying the naive algorithm, we can still expect cases where the "naive" algorithm is artificially slower: for instance, the naive algorithm might deliberately wait for $t^2$ extra steps after using $t$ steps to determine that the input halts. But I would be very surprised if this still happened with an obvious algorithm, constructed for each specific setup without maliciously slowing it down. For example, if we use a less efficient parsing algorithm, it introduces at most an exponential slowdown with respect to the size of $M$, which is dwarfed by $f(M)$. With this in mind, I won't actually write down the naive algorithm, but if I'm wrong about this, feel free to specify one.
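To make the adversarial scenario concrete, here is a toy sketch (the countdown "program" and the step accounting are illustrative, not part of the question):

```python
def slowed_semidecide(n):
    """A deliberately slowed 'naive' semidecider. The honest work is a
    countdown that detects halting in t = n steps; we then burn t**2
    extra steps before answering. A straightforward simulator answers
    in t steps, beating this by a factor of t, purely because of the
    artificial padding."""
    t = 0
    while n > 0:            # honest work: t steps
        n -= 1
        t += 1
    wasted = 0
    for _ in range(t * t):  # malicious padding: t^2 steps
        wasted += 1
    return t, t + wasted    # (detection time, total time)
```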
My first instinct is to suppose for contradiction that there is a faster algorithm halting in $g(M)$ steps, where $g(M) < f(M)$ whenever the size of $M$ is greater than $N$. Here $N$ is a known constant, so we might somehow exploit this to decide the halting problem, similar to how we prove that $BB$ is non-computable. But this doesn't seem to work out in the end. We might also try to iterate the faster algorithm on itself to semidecide faster and faster. This also doesn't seem to lead to a contradiction.
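For reference, the hypothesis assumed in this attempt can be written out explicitly (here $|M|$ denotes the size of $M$; the notation is mine):

$$\exists N \;\forall M:\quad \bigl(M \text{ halts and } |M| > N\bigr) \implies g(M) < f(M).$$

Note this is weaker than the $o(f(M))$ speedup asked for above, so a contradiction derived from it would rule out the stronger statement as well.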