Note: I consider only mirror symmetries here, not rotational symmetries.
Let's suppose we have a (hyper)cube of d dimensions, each side n units long (a Rubik's cube would be d=3, n=3).
A naive algorithm would generate all 2^(n^d) combinations of the n^d points, and check each one for a symmetry clash with all the others.
If we represent a combination of points as a bit vector n^d bits long, we can use a map (bit vector -> boolean) to mark all the symmetries of the bit vector with true. We could then skip a combination if it's already marked in the map.
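A minimal sketch of this naive, map-based deduplication (my illustration, with an arbitrary tiny grid; only mirror flips along the axes count as symmetries, per the note above):

```python
from itertools import product

# Tiny grid so the full 2^(n^d) enumeration stays cheap.
n, d = 3, 1

def mirrors(cells, n, d):
    """All 2^d mirror images of a set of filled cells (flip any subset of axes)."""
    images = []
    for flips in product([False, True], repeat=d):
        images.append(frozenset(
            tuple(n - 1 - c if f else c for c, f in zip(cell, flips))
            for cell in cells))
    return images

all_cells = list(product(range(n), repeat=d))
seen = set()   # the space-hungry map: every symmetry ever produced
unique = 0
for bits in product([0, 1], repeat=n ** d):
    cells = frozenset(c for c, b in zip(all_cells, bits) if b)
    if cells in seen:
        continue  # some mirror image of this pattern was generated before
    unique += 1
    seen.update(mirrors(cells, n, d))

print(unique)  # → 6 distinct patterns up to mirror symmetry for n=3, d=1
```

For a length-3 bit vector the 8 raw patterns collapse into 6 mirror classes, which the run above confirms.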
This approach is very space-inefficient: it takes a map with 2^(n^d) entries, that is, a bitmap with this many bits. (For the Rubik's cube it would be 2^27 bits = 128 Mbit = 16 MB.)
We can remember only the canonical representations, that is, the bit vectors that have the smallest integer value among their symmetries when represented as an n^d-bit unsigned word. When we generate a new combination of points, we generate all its symmetries, and check only whether we've seen the one with the smallest numerical value. Since each equivalence class of up to 2^d symmetries is stored once, the map shrinks by a factor of up to 2^d, to about 2^(n^d - d) bits (2^24 bits = 2 MB for the Rubik's cube). It makes us generate these 2^d symmetries on each step, though, so we spend O(2^(n^d) * 2^d) = O(2^(n^d + d)) time. Still poor.
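The canonical-form idea might be sketched like this (the encoding that maps cell (x_0, ..., x_{d-1}) to bit position sum(x_i * n^i) is my assumption; the text doesn't fix one):

```python
from itertools import product

n, d = 3, 2  # a 3x3 plane: 2^9 patterns, 2^2 mirror symmetries

def canonical(bits, n, d):
    """Smallest integer among the 2^d mirror images of an n^d-bit pattern."""
    best = None
    for flips in product([False, True], repeat=d):
        value = 0
        for cell in product(range(n), repeat=d):
            src = sum(c * n ** i for i, c in enumerate(cell))
            if bits >> src & 1:
                mirrored = tuple(n - 1 - c if f else c
                                 for c, f in zip(cell, flips))
                value |= 1 << sum(c * n ** i for i, c in enumerate(mirrored))
        if best is None or value < best:
            best = value
    return best

# Storing only the canonical representative of each class shrinks the map.
seen = set()
for bits in range(2 ** (n ** d)):
    seen.add(canonical(bits, n, d))

print(len(seen))  # number of equivalence classes of 3x3 patterns under flips
```

The inner min over all flips makes `canonical` invariant within a class, so two patterns collide in `seen` exactly when they are mirror images of each other.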
We can apply the idea from the previous paragraph to the 1-dimensional case. To generate all combinations in a vector of length d, we can just increment a binary number d bits long, starting from all 0s. Let's divide our vector into two d/2-long segments, e.g. left and right. We can notice that for each 1 bit in the left segment we only need to see combinations that have a 1 bit in the symmetric position of the right segment. Otherwise we would have already generated a symmetric combination earlier, when the positions of the bits were swapped and the 0 came before the 1. This way, for every bit position in the right half (r) and the symmetric position in the left half (l), we only need to generate 3 combinations: (l = 0, r = 0); (l = 1, r = 1); (l = 0, r = 1). That is three choices for each of the d/2 mirrored pairs, so only about 3^(d/2) ≈ 2^(0.79·d) combinations instead of 2^d. (Strictly speaking, the pairs are not fully independent: the constraint on an inner pair should only apply while all pairs outside it are equal, so the plain per-pair filter slightly over-prunes — it never emits two mirror images, but it can miss a class whose canonical member violates an inner-pair constraint.)
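A sketch of this paired-position enumeration for a single vector (even d only, for brevity; by convention I keep the numerically smaller of two mirror images, consistent with the smallest-value canonical form defined earlier):

```python
from itertools import product

def reduced_vectors(d):
    """Bit vectors of even length d with one of each mirror pair pruned.

    For every mirrored pair of positions (l in the left half, r at the
    mirror position in the right half) only (0,0), (1,1) and (0,1) are
    allowed: (1,0) is the numerically larger mirror image of (0,1).
    """
    assert d % 2 == 0, "odd lengths would need an extra free middle bit"
    pair_choices = [(0, 0), (1, 1), (0, 1)]
    for pairs in product(pair_choices, repeat=d // 2):
        left = [p[0] for p in pairs]
        right = [p[1] for p in reversed(pairs)]  # mirror positions of left
        yield tuple(left + right)

vectors = list(reduced_vectors(4))
print(len(vectors))  # → 9, i.e. 3^(4/2) vectors instead of 2^4 = 16
```

No emitted vector is the reversal of another emitted vector (unless it is a palindrome), which is exactly the property the trick relies on.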
A cube of d dimensions can be constructed of n^(d-1) such vectors, each n units long. The trick above gives us the vectors cheaper than the naive approach: each vector takes one of about 3^(n/2) admissible values instead of 2^n. To generate a cube, we choose one of these values for each of its n^(d-1) vectors.
If we look at the cube along the dimension of our 1-dimensional vectors, we can see that we don't have to check for symmetry along this dimension: while generating the cubes, we eliminate symmetries for each involved vector.
Now if we look across this dimension, we can reuse the same trick.
When we look across, we look at e.g. the first bits of the vectors making up a particular plane. These bits themselves form a 1-dimensional bit vector, so we can eliminate most combinations of its bits for symmetry reasons, as described above. If we pick a particular 1-d vector of a cube (e.g. the leftmost topmost one), we can rule out many vectors of the same plane (e.g. the top one) based on the value of a particular bit: for a vector in a mirror-symmetrical position on the plane, we can avoid generating the combinations that have that bit set (or unset). This reduces the number of vectors we have to generate for a particular plane drastically: each eliminated bit halves the number of possible vectors at the mirror-reflected position. The result is a sequence of planes without symmetrical counterparts along each dimension.
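A toy sketch of this across-the-rows pruning on a single 3x3 plane: the first bits of the rows form a column, and the outermost mirrored pair of rows may skip one combination of that column's bits. Only this one bit of one pair is pruned here; a full implementation would treat every column and every mirrored pair, cutting far more:

```python
from itertools import product

n = 3
rows = list(product([0, 1], repeat=n))        # all 2^n possible rows
# Prune on the first bit of the top and bottom rows: (1, 0) is the mirror
# image of (0, 1) under flipping the plane upside down, so skip it.
planes = [p for p in product(rows, repeat=n)
          if (p[0][0], p[n - 1][0]) != (1, 0)]

print(len(planes))  # → 384, i.e. 3/4 of the naive 2^(n*n) = 512 planes
```

Even this single-bit cut removes a quarter of the candidate planes; each further pruned bit halves the options at its mirror-reflected position, as described above.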
This trick can be applied again to limit the generation of the subsequent planes along a third dimension, and so on.
Though not a complete algorithm, I hope this helps.