Removing numbers greater than the target affects the correctness of this program: such numbers could be paired with negative numbers to reach the target, so they must not be discarded.

You could get a performance improvement by computing the difference between each element and the target, then checking whether that difference is in the list. On its own this doesn't reduce the computational complexity (membership testing on a list is linear, so the whole program is still O(_n_²)), but we can build on it: if the list is first sorted and we use a binary search to test membership, we get O(_n_ log _n_); if we instead convert to a structure with fast lookup (such as a `collections.Counter`), we get down to O(_n_) expected time. With a `Counter`, we can account for all combinations of a value and its complement by multiplying one count by the other (taking care of the special case where the value is exactly half the target, since those pairs come from within a single count).

We could do with some automated tests. Consider importing the `doctest` module and using it. Some good test cases to include:

* 1, [] → 0
* 1, [1] → 0
* 1, [0, 1] → 1
* 0, [-1, 1] → 1
* 0, [0, 1] → 0
* 4, [1, 4, 3, 0] → 2
* 4, [1, 1, 3, 3] → 4
* 4, [2, 2, 2, 2] → 6
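
A minimal sketch of the `Counter` approach, with the test cases above as doctests (the name `count_pairs` and the assumption of integer inputs are mine, not from your code):

```python
from collections import Counter


def count_pairs(target, numbers):
    """Count index pairs (i < j) with numbers[i] + numbers[j] == target.

    >>> count_pairs(1, [])
    0
    >>> count_pairs(1, [1])
    0
    >>> count_pairs(1, [0, 1])
    1
    >>> count_pairs(0, [-1, 1])
    1
    >>> count_pairs(0, [0, 1])
    0
    >>> count_pairs(4, [1, 4, 3, 0])
    2
    >>> count_pairs(4, [1, 1, 3, 3])
    4
    >>> count_pairs(4, [2, 2, 2, 2])
    6
    """
    counts = Counter(numbers)
    total = 0
    for value, count in counts.items():
        # Only consider the smaller half of each (value, complement) pair,
        # so each combination is counted exactly once.
        if 2 * value < target:
            total += count * counts[target - value]
    # Special case: a value that is exactly half the target pairs with
    # other copies of itself, giving "count choose 2" combinations.
    if target % 2 == 0:
        half = counts[target // 2]
        total += half * (half - 1) // 2
    return total


if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Building the `Counter` is O(_n_) and the loop visits each distinct value once, so the whole thing is O(_n_) expected time.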