I have a problem that my professor gave us:
Algorithms A and B spend exactly TA(n) = 0.1n^2log10(n) and TB(n) = 2.5n^2 microseconds, respectively, for a problem of size n. Choose the algorithm, which is better in the Big-Oh sense, and find out a problem size n0 such that for any larger size n > n0 the chosen algorithm outperforms the other. If your problems are of the size n ≤ 10^9, which algorithm will you recommend to use?
At first, I thought algorithm A would be better in the Big-Oh sense, but his answer says B is better. My reasoning for A being better is that it grows slower than B. Am I correct, or is my professor correct?
Here's his answer:
In the Big-Oh sense, algorithm B is better. It outperforms algorithm A when TB(n) ≤ TA(n), that is, when 2.5n^2 ≤ 0.1n^2 log10(n). This inequality reduces to log10(n) ≥ 25, or n ≥ n0 = 10^25. If n ≤ 10^9, the algorithm of choice is A.
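To see the crossover concretely, here is a small sketch (my own, not from the professor's answer) that plugs the two running-time formulas into Python and checks both sides of n0 = 10^25:

```python
import math

def t_a(n):
    # TA(n) = 0.1 * n^2 * log10(n) microseconds
    return 0.1 * n**2 * math.log10(n)

def t_b(n):
    # TB(n) = 2.5 * n^2 microseconds
    return 2.5 * n**2

# Below the crossover (e.g. n = 10^9), A is faster:
print(t_a(10**9) < t_b(10**9))   # log10(10^9) = 9, so 0.1*9 = 0.9 < 2.5

# Above the crossover (e.g. n = 10^26), B is faster:
print(t_a(10**26) > t_b(10**26)) # log10(10^26) = 26, so 0.1*26 = 2.6 > 2.5
```

Both prints are True, matching the algebra: B only wins once log10(n) exceeds 25, far beyond any n ≤ 10^9.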
Is he correct or am I correct?