
I have written a function that takes a vector of ints and returns the minimum difference between two elements (i.e., the difference between the two elements that are closest in value).

I am trying to work out the algorithmic complexity. The outer for loop iterates over all elements of the input (I'll call this N), which is linear complexity: O(n). The inner for loop iterates N - 1 times on the first pass, then N - 2, N - 3, ..., 2, 1, 0. So I don't think this algorithm is O(n^2). My guess is it is O(N * log(N)); is that correct?

    #include <cstdlib>
    #include <vector>

    long int function(const std::vector<int> &Input)
    {
        long int MinimumDifference = -1;
        for(size_t i = 0; i < Input.size(); i++)
        {
            for(size_t j = i + 1; j < Input.size(); j++)
            {
                // Cast before subtracting so the int subtraction cannot overflow
                long int Difference = std::labs(static_cast<long int>(Input[i]) - Input[j]);
                if(Difference < MinimumDifference || MinimumDifference < 0)
                {
                    MinimumDifference = Difference;
                }
            }
        }
        return MinimumDifference;
    }

Edit: So the above is O(n^2). I think I can achieve O(N * log(N)) by sorting the list first (using std::sort, which is O(N * log(N))) and then making a single pass over the sorted list to find the smallest difference, which must now be between adjacent elements.

    #include <cstdlib>
    #include <vector>
    #include <algorithm>

    long int function(const std::vector<int> &Input)
    {
        std::vector<int> SortedInput = Input;
        std::sort(SortedInput.begin(), SortedInput.end());
        long int MinimumDifference = -1;
        for(size_t i = 0; i + 1 < SortedInput.size(); i++)
        {
            // Adjacent elements of a sorted vector, so the difference is non-negative
            long int Difference = static_cast<long int>(SortedInput[i + 1]) - SortedInput[i];
            if(Difference < MinimumDifference || MinimumDifference < 0)
            {
                MinimumDifference = Difference;
            }
        }
        return MinimumDifference;
    }
  •
    Note that if you want to improve O, you could sort the list first, then do a single pass through to get the smallest difference. This could get you O(n log(n)) (depending on the sort algorithm used). Commented Nov 7, 2013 at 9:09
  •
    You have to realise that even when an algorithm is O(n^2), this doesn't necessarily mean there are precisely n^2 operations to be performed. Rather, it stands for c * n^2, where c can be any positive constant. In your case, c = 1/2. Commented Nov 7, 2013 at 9:14

2 Answers


A little algebraic manipulation tells us that

(n-1) + (n-2) + ... + 1 = n * (n-1) / 2 = n^2 / 2 + O(n) 

so this algorithm's complexity is indeed O(n^2).


4 Comments

Funny, I would have used o(n^2) (something small compared to) instead of O(n) (something the size of). Just a different mindset, but I agree with the answer (I was typing mine at the same time).
@Bruce: mixing Big-O and little-o notation is a good way to confuse people, though. Especially for people whose handwriting is as small as mine.
@Bruce: AFAIK, n^2 + n = n^2 + o(n^2), since lim(n/n^2) = lim(1/n) = 0. If f(n) = O(y(n)) + o(y(n)), then f(n) = O(y(n)). On the contrary, if f(n) = O(y(n)) + O(x(n)), then you have to compare x and y to see which one weighs most (which in your case is trivial, as comparing n^2 to n is immediate)... Anyway, good answer, and sorry for the comment spam.
@Bruce: actually, the tighter answer would be to use Θ(n^2), since the complexity is bounded both above and below by n^2. The definition of f(n) in o(g(n)) is that for every real k > 0, |f(n)| < k * |g(n)|, so o would be inaccurate here (see Wikipedia).

No: you're nesting two loops of roughly the same size, so your complexity is O(n^2/2) = O(n^2).

To convince yourself, note that sum(i) for i in [0,n] = n*(n+1)/2 = O(n^2/2) = O(n^2), which is exactly what you're doing.

To do some "complexity for newbies", just count the nested loops. If a loop iterates over the base elements, multiply by n; if it reduces the problem by an order (splits the problem in halves), multiply by log(n).

