Recently I have been searching a lot, trying to understand Big O notation (and asymptotic notations in general), because I had never heard of them before I started studying computer science.
(In computer science) To get the asymptotic notation of a function, we must first get the function itself. And because functions can sometimes be complex, and therefore difficult to compare with each other, we find their asymptotic notations instead. So asymptotic notations are used because we don't want to deal with complex functions, and their only purpose is to compare the efficiency of two or more algorithms for large values of $n$ in the simplest way. Am I right?
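For example (this is just my own made-up function, to illustrate what I mean by "simplifying"):

$$f(n) = 3n^2 + 5n\log n + 42 \quad\Longrightarrow\quad f(n) = O(n^2),$$

so instead of comparing the full cost functions of two algorithms, we would only compare $n^2$ against, say, $n\log n$.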
In math, these notations can be used for $n$ approaching some real number $a$ (that is, $n \to a$, not only $n \to \infty$), but in computer science they are used only for large $n$ ($n \to \infty$). Am I right?
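For example (my own illustration, if I understand the math usage correctly): a Taylor-style statement like $\sin x = x - \frac{x^3}{6} + O(x^5)$ as $x \to 0$ uses big O near a finite point, whereas in algorithm analysis I only ever see statements like $T(n) = O(n \log n)$, implicitly as $n \to \infty$.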
(In computer science) When we have $f(n) = O(g(n))$, this means that for all $n > n_0$, when we multiply $g(n)$ by some constant $c$ we get $f(n)$: $f(n) = c \cdot g(n)$. So my question is: is this constant $c$ constant only for a given $n$, changing for every different value of $n$ (case 1), or is the constant $c$ the same for all values of $n$, which would mean that after the point $n_0$ the graph of $f(n)$ is parallel to the graph of $g(n)$ (case 2)?
When I asked myself this question, the answer my head gave me was: "If case 1 is true, then $c$ is not really a constant, since it varies with $n$. But if case 2 is true, why are some functions $g(n)$ that are said to be big O bounds of $f(n)$ not actually parallel to $f(n)$?" As you can see, that is where I got confused.
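To make my confusion concrete, here is a small Python sketch I wrote; the functions `f` and `g` are just my own example of a pair where the books say $f(n) = O(g(n))$:

```python
# My own example: f(n) = 2n^2 + 3n and g(n) = n^2,
# where the textbooks say f(n) = O(g(n)).
def f(n):
    return 2 * n**2 + 3 * n

def g(n):
    return n**2

# If case 2 were true, f(n) / g(n) should equal the same
# constant c for every n past some n0. So I print the ratio:
for n in [10, 100, 1000, 10000]:
    print(n, f(n) / g(n))
```

When I run it, the ratio keeps changing (2.3, 2.03, 2.003, ...), which looks like case 1 to me, yet the books still call $c$ a constant.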