
The following example is taken from Cracking the Coding Interview (6th edition). According to the book, the time complexity of the following code is O(n^2 * n!). (See Example 12, pages 32-33.)

public static void main(String[] args) {
    new PermutationsTest().permutations("ABCD");
}

void permutations(String string) {
    permutations(string, "");
}

static int methodCallCount = 0;

void permutations(String string, String perfix) {
    if (string.length() == 0) {
        System.out.println(perfix);
    } else {
        for (int i = 0; i < string.length(); i++) {
            String rem = string.substring(0, i) + string.substring(i + 1);
            permutations(rem, perfix + string.charAt(i));
        }
    }
    System.out.format("Method call count %s \n", methodCallCount);
    methodCallCount += 1;
}

I find it difficult to understand how this was calculated. Here are my thoughts about it.

There can be n! arrangements, so there should be at least n! calls. However, each call does roughly n units of work (since it has to loop through the string that was passed in). So shouldn't the answer be O(n * n!)?

But what really happens is that in each successive call the loop runs over a string that is one character shorter (n-1, n-2, and so on). So can we say it should rather be n! * n(n+1)/2?

Please explain.

3 Answers


There are n! possible strings, but each character that's added to the string requires:

String rem = string.substring(0, i) + string.substring(i + 1);
permutations(rem, perfix + string.charAt(i));

The substring calls and the string concatenation are O(n). For each character in a string that makes O(n^2), and across all strings O(n^2 * n!).
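To make the per-string O(n^2) concrete, here is a minimal sketch of the counting argument (the class and method names are mine, not from the answer): Java strings are immutable, so appending one character to a string of length k copies roughly k + 1 characters into a fresh String, and building one string of length n character by character copies about n(n+1)/2 characters in total.

```java
// Sketch (hypothetical names): count characters copied when a string of
// length n is built one character at a time via concatenation.
public class ConcatCost {
    static long copiesToBuild(int n) {
        long copies = 0;
        for (int k = 0; k < n; k++) {
            copies += k + 1; // appending to a length-k string copies k + 1 chars
        }
        return copies; // n(n+1)/2, i.e. O(n^2)
    }

    public static void main(String[] args) {
        System.out.println(copiesToBuild(4));  // prints 10
        System.out.println(copiesToBuild(10)); // prints 55
    }
}
```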

EDIT:

I calculated the complexity of creating a string via concatenation as O(n^2), but multiplying by the number of strings is inaccurate because the strings share common prefixes, so there's a lot of double counting there.

Since there are far more calls for the final strings than for the rest, they dominate the complexity, so they're the only ones that need to be counted. So I'm thinking we could reduce the complexity to O(n * n!).


7 Comments

@fgb - Could you please explain it a little? That is the exact part I find difficult to understand. Why are the substring() calls O(n)? I thought those API calls were O(1), since the time taken wouldn't change based on the length of the string. What about the println() calls?
@Codor That would be true if they were added rather than multiplied. But I think in this case, you can't simplify.
@UpulDoluweera Substring depends on the length of the string because it has to create a new string to return. The time is based on the length of this new string.
@Codor O(n^2 * n!) is not O(n!). That would mean that there's a c such that for large n, n^2*n! <= cn!. That's obviously false, since the n! cancel on both sides.
@PaulHankin You're right, it's my mistake; I'll delete the comment.

To get the asymptotic time complexity, you need to count how many times the permutations function is called and how much work each call does; the answer is the product of the two.

The string length len = string.length() decreases by 1 with each recursion level, so there is 1 call with len = n, n calls with len = n-1, n*(n-1) calls with len = n-2, ..., and n! calls with len = 0. Hence, the total number of calls is:

n!/1! + n!/2! + n!/3! + n!/4! + .. + n!/n! = sum(k=1..n) n!/k! 

In the asymptotic limit this can be calculated:

sum(k=1..n) n!/k! = n! * (-1 + sum(k=0..n) 1/k! * 1^k) -> n! * (e^1 - 1) = (e-1)*n!,

which is O((e-1)*n!) = O(n!). Here e is Napier's constant, 2.718281828... To calculate the sum I used the Taylor series e^x = sum(k=0..infinity) x^k/k! at x = 1.
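As a sanity check (my own sketch, not part of the answer), the sum can be evaluated exactly with BigInteger; its ratio to n! should approach e - 1 ≈ 1.7182818:

```java
import java.math.BigInteger;

// Sketch (hypothetical names): evaluate sum(k=1..n) n!/k! exactly and
// compare its ratio to n! against e - 1.
public class CallCount {
    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++) f = f.multiply(BigInteger.valueOf(i));
        return f;
    }

    static BigInteger totalCalls(int n) {
        BigInteger nf = factorial(n);
        BigInteger sum = BigInteger.ZERO;
        for (int k = 1; k <= n; k++) sum = sum.add(nf.divide(factorial(k)));
        return sum;
    }

    public static void main(String[] args) {
        // For n = 13 this reproduces the table below: 13! = 6227020800
        // calls... wait, permutations = 6227020800 and calls = 10699776686.
        System.out.println(totalCalls(13)); // prints 10699776686
        System.out.println(totalCalls(13).doubleValue() / factorial(13).doubleValue());
    }
}
```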

Now for each call of the function there is the substring and concatenation operations:

String rem = string.substring(0, i) + string.substring(i + 1); 

This operation requires on the order of string.length() operations, since under the hood the String class must copy each character into a new String (string.length() - 1 characters copied). Therefore, the total complexity is the product of the two: O(n * n!).
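This can also be checked directly. Below is my own instrumentation sketch (not from the answer): since string.length() + prefix.length() = n at every call, each loop iteration copies about n characters, so the total is n * sum(s=1..n) n!/(s-1)! = O(n * n!) copied characters.

```java
// Sketch (hypothetical names): count the characters copied by the
// substring calls and concatenations in each loop iteration of the
// permutation recursion.
public class CopyCount {
    static long copies = 0;

    static void permutations(String string, String prefix) {
        if (string.length() == 0) return;
        for (int i = 0; i < string.length(); i++) {
            // rem has string.length() - 1 chars, the new prefix has
            // prefix.length() + 1 chars: together exactly n chars copied.
            copies += (string.length() - 1) + (prefix.length() + 1);
            String rem = string.substring(0, i) + string.substring(i + 1);
            permutations(rem, prefix + string.charAt(i));
        }
    }

    public static void main(String[] args) {
        permutations("ABCD", ""); // n = 4
        System.out.println(copies); // 256 = 4 * (24 + 24 + 12 + 4)
    }
}
```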

To check that the number of calls to perm behaves as I said, I wrote a small C++ program for the permutations (without the string operations, so it should be O(n!)).

#include <iostream>
#include <string>
#include <iomanip>

unsigned long long int permutations = 0;
unsigned long long int calls = 0;

void perm(std::string& str, size_t index) {
    ++calls;
    if (index == str.size() - 1) {
        ++permutations;
        // std::cout << count << " " << str << std::endl;
    } else {
        for (size_t i = index; i < str.size(); ++i) {
            std::swap(str[index], str[i]);
            perm(str, index + 1);
            std::swap(str[index], str[i]);
        }
    }
}

int main() {
    std::string alpha = "abcdefghijklmnopqrstuvwxyz";
    std::cout << std::setprecision(10);
    for (size_t i = 1; i < alpha.size() + 1; ++i) {
        permutations = calls = 0;
        std::string str(alpha.begin(), alpha.begin() + i);
        perm(str, 0);
        std::cout << i << " " << permutations << " " << calls << " "
                  << static_cast<double>(calls) / permutations << std::endl;
    }
}

Output:

1 1 1 1
2 2 3 1.5
3 6 10 1.666666667
4 24 41 1.708333333
5 120 206 1.716666667
6 720 1237 1.718055556
7 5040 8660 1.718253968
8 40320 69281 1.71827877
9 362880 623530 1.718281526
10 3628800 6235301 1.718281801
11 39916800 68588312 1.718281826
12 479001600 823059745 1.718281828
13 6227020800 10699776686 1.718281828
14 took too long

The columns are: length of the string n, n!, sum(k=1..n) n!/k!, and the ratio of the third and second columns, which should converge to (e-1) = 1.71828182845905. It seems to converge rather fast to the asymptotic limit.

Comments


I'm afraid the book is mistaken. The time complexity is ϴ(n!n), as has been correctly conjectured in fgb's answer.

Here is why:

As always with recursive functions, we first write down the recurrence relation. In this case, we have two inputs, string and perfix [sic!]. Let's denote their lengths by s and p, respectively:

T(0,p) = p                 // println
T(s,p) = s *               // for (int i = 0; i < string.length(); i++)
         (O(s +            //   String rem = string.substring(0, i) + string.substring(i + 1);
             p) +          //   perfix + string.charAt(i)
          T(s-1,p+1))      //   permutations(rem, perfix + string.charAt(i));
       = s*T(s-1,p+1) + O(s(s+p))

However, note that

  1. s+p always stays constant, namely it is k, the original length of the string string.
  2. when s has counted down to 0, p also has length k.

So for a particular k, we can rewrite the recurrence relation like this:

T_k(0) = k
T_k(s) = s*T_k(s-1) + O(ks)

A good rule to memorize is that recurrence relations of the form

T(n) = n * T(n-1) + f(n) 

have the general solution

T(n) = n! (T(0) + Sum { f(i)/i!, for i=1..n }) 
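This rule is easy to verify numerically. Here is a quick sketch of my own (with an arbitrary f(i) = i^2; the names are made up) comparing the recurrence evaluated directly against the closed form:

```java
// Sketch: check that T(n) = n*T(n-1) + f(n), T(0) = t0, has the closed
// form T(n) = n! * (t0 + sum_{i=1..n} f(i)/i!), using f(i) = i*i.
public class RecurrenceCheck {
    static double f(int i) { return (double) i * i; }

    static double byRecurrence(int n, double t0) {
        double t = t0;
        for (int i = 1; i <= n; i++) t = i * t + f(i);
        return t;
    }

    static double byClosedForm(int n, double t0) {
        double fact = 1, sum = 0;
        for (int i = 1; i <= n; i++) {
            fact *= i;          // fact == i! inside the loop
            sum += f(i) / fact;
        }
        return fact * (t0 + sum); // fact == n! after the loop
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 10; n++) {
            System.out.println(byRecurrence(n, 1.0) + " " + byClosedForm(n, 1.0));
        }
    }
}
```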

Applying this rule here yields the exact solution

T_k(s) = s! (k + Sum { ki/i!, for i=1..s }) = s!k (1 + Sum { 1/(i-1)!, for i=1..s }) 

Now recall that k is the original length of the string string, so we are actually just interested in the case k = s, hence we can write down the final exact solution for this case as

T(s) = s!s (1 + Sum { 1/(i-1)!, for i=1..s }) 

Since the series Sum { 1/(i-1)!, for i=1..infinity } converges, we finally have

T(n) = ϴ(n!n), qed 
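To make the convergence concrete, one can check numerically (a sketch of mine, not part of the proof) that T(s)/(s!*s) = 1 + Sum { 1/(i-1)!, for i=1..s } converges, namely to 1 + e:

```java
// Sketch: the factor 1 + sum(i=1..s) 1/(i-1)! tends to 1 + e, so
// T(s) = s!*s*(1 + sum ...) is Theta(n! * n).
public class ThetaCheck {
    static double ratio(int s) {
        double sum = 0, fact = 1;
        for (int i = 1; i <= s; i++) {
            sum += 1.0 / fact; // fact == (i-1)! here
            fact *= i;
        }
        return 1 + sum;
    }

    public static void main(String[] args) {
        System.out.println(ratio(20)); // ~3.718281828...
    }
}
```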

2 Comments

The work done for the string concatenation depends on the original length of the string, not on the n at each recursion level, because prefix grows as string shrinks, so the total length stays the same. At the lowest level, there is n work done for each of the n! permutations, so the complexity should be at least n*n!.
@fgb You were right, I've edited the proof.
