Remember that all asymptotic analysis tells us is how the running time varies with the size of the input for a particular algorithm. It doesn't tell us how many seconds it will take for an implementation of the algorithm to complete with a given input size. But we do know that if an O(n^2) algorithm takes X seconds to process a set of items, it will take approximately 100 times as long to process 10 times as many items, since (10n)^2 = 100 * n^2.
Asymptotic notation eliminates constant factors because they're not really relevant to how the running time grows with the size of the input. When you're talking about using an O(n^2) algorithm on 1,000,000 items, it's unlikely that your constant factor is going to make much difference; the n^2 term is going to dwarf it.
You can, however, estimate the constant factors. One way is by counting "basic operations" to come up with a rough estimate of how much work is done per iteration. Or, you could run both algorithms with some sample input, count the iterations, and figure out how many microseconds it takes per iteration. Either way, you'll come up with a value that is the constant for that particular algorithm.
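Here's a minimal sketch in Python of the second approach: time the algorithm on sample input and divide by n^2 to back out a rough per-item constant. The function names, the use of random floats as sample data, and the trial count are my own choices for illustration, not anything prescribed above.

```python
import random
import time

def insertion_sort(a):
    # Standard in-place insertion sort.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def estimate_constant(n=2000, trials=5):
    # Time insertion sort on random input and divide by n^2
    # to get a rough per-operation constant, in seconds.
    total = 0.0
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        insertion_sort(data)
        total += time.perf_counter() - start
    return (total / trials) / (n * n)

if __name__ == "__main__":
    print("Estimated Ci:", estimate_constant())
```

The same timing harness, pointed at a Quick sort implementation and divided by n log n instead of n^2, gives you the other constant.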
So, say you want to compare insertion sort (an O(n^2) algorithm) with Quick sort (an O(n log n) algorithm). We know from empirical evidence that insertion sort is faster than Quick sort when n is small (how small is another question). That's why optimized Quick sort implementations switch to insertion sort when the subarray has fewer than 10 or so items, as sketched below.
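A minimal sketch of that hybrid, assuming a Lomuto partition with the last element as pivot and a cutoff of 10; the function names, the pivot choice, and the exact cutoff are illustrative assumptions, not part of any particular library's implementation.

```python
CUTOFF = 10  # tuning choice; around 10 items is typical

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    # Fall back to insertion sort on small subarrays, where
    # its smaller constant factor wins despite the O(n^2) growth.
    if hi - lo + 1 <= CUTOFF:
        insertion_sort_range(a, lo, hi)
        return
    p = partition(a, lo, hi)
    hybrid_quicksort(a, lo, p - 1)
    hybrid_quicksort(a, p + 1, hi)

def insertion_sort_range(a, lo, hi):
    # Insertion sort restricted to a[lo..hi] inclusive.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def partition(a, lo, hi):
    # Lomuto partition: last element as pivot.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```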
Using one of the techniques I described above, you determine the constants for each of the algorithms. Let's call them Ci and Cq. Given that, we can estimate that insertion sort will be faster than Quick sort when (Ci * n^2) < (Cq * n log n).
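Once you have the two constants, finding the crossover point is just a matter of plugging in values of n until the inequality flips. A short Python sketch, with made-up constants purely for illustration (Ci and Cq here are not measurements):

```python
import math

def crossover(ci, cq, limit=10_000):
    # Smallest n at which Ci * n^2 is no longer less than
    # Cq * n * log2(n), i.e. where Quick sort takes over.
    for n in range(2, limit):
        if ci * n * n >= cq * n * math.log2(n):
            return n
    return None

# Hypothetical constants: insertion sort's inner loop is cheap (Ci = 1)
# while Quick sort does more work per iteration (Cq = 8).
print(crossover(1.0, 8.0))  # roughly the n where it pays to switch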