I'm trying to understand the computational complexity of this pseudocode:
    values is a set of n unique elements
    subset is an empty set
    for 0 ... k:
    X:  randomly select a value from values
        if value is in subset
            goto X
        else
            insert value into subset

This is a (poor) algorithm for selecting a unique random subset of k elements from n, and I'm aware of the better choices, but I wanted to understand its computational complexity. I was told it was O(n!).
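To pin down what is being analyzed, here is a direct Python transcription of the pseudocode (the `goto X` retry becomes simply re-entering the loop); the function name and structure are my own, not from any library:

```python
import random

def random_subset(values, k):
    """Draw k distinct elements from values by rejection sampling."""
    values = list(values)
    subset = set()
    while len(subset) < k:
        v = random.choice(values)  # "randomly select a value from values"
        if v not in subset:        # duplicate -> retry (the "goto X")
            subset.add(v)
    return subset
```

For example, `random_subset(range(10), 4)` returns a 4-element set drawn from 0–9; the number of draws it takes is random, which is exactly the source of the complexity question.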
I can see easily that this is O(k) when duplicates are allowed, because the conditional test is eliminated from the pseudocode and exactly k choices are made.
When duplicates must be rejected, there is a probability that a re-draw will be required, and that probability grows with each successful insertion. Depending on the values of n and k this effect is non-negligible, but I'm not certain how it affects the big-O complexity in a generalized way. Could someone explain this to me?