The cross product in 3 dimensions has the following properties:
- There are exactly 2 inputs (as with most products in math).
- The length of the output vector is the area of the parallelogram spanned by the input vectors.
- The orientation of the output vector is a unique vector perpendicular to the inputs, following the convention that the cross product of +X and +Y gives +Z.
If you want all 3 of these properties at once, the product only exists in 3D. In 4D and above, the space perpendicular to two input vectors has more than one dimension, so there is a continuum of candidate perpendicular vectors and no way to decide between them. There is also a 7D cross product (related to the structure of the octonions) where there is a way to choose just one particular vector from 2 inputs, but that is a special case.
Game engines often contain a 2D cross function: `a.x * b.y - a.y * b.x`. This operation preserves the first two properties. However, instead of the third property, where the output is a perpendicular vector, the output is a single number: instead of the orientation of a vector, you get the sign of a number, positive or negative. This must be the case, because you cannot have 3 mutually perpendicular vectors in a 2-dimensional space. However, you can think of the output number as living on a "Z axis" perpendicular to the 2-dimensional input plane, in which case it does conceptually match the 3D cross product.
The 2D cross function, which gives the area of the parallelogram spanned by its two input vectors, generalizes to N dimensions with the following function. However, this completely loses the orientation property: the output is not the signed area, just the absolute area.
```cpp
double Vector4D::cross(const Vector4 &p_a, const Vector4 &p_b) {
	const double diagonal = p_a.length_squared() * p_b.length_squared();
	const double non_diagonal = p_a.dot(p_b);
	return Math::sqrt(diagonal - non_diagonal * non_diagonal);
}
```

There is also another way to generalize the cross product. Instead of keeping the first two properties, we can instead keep the last two. I call this function `perpendicular` in my code. In N dimensions, if we have N-1 input vectors, we can find a new vector perpendicular to all inputs. This will be a unique vector, and its length will be the volume of the parallelepiped spanned by the inputs (or higher-dimensional equivalent). Here is a version of the function specific to 4 dimensions:
```cpp
Vector4 Vector4D::perpendicular(const Vector4 &p_a, const Vector4 &p_b, const Vector4 &p_c) {
	Vector4 perp;
	perp.x = -p_a.y * (p_b.z * p_c.w - p_b.w * p_c.z) + p_a.z * (p_b.y * p_c.w - p_b.w * p_c.y) - p_a.w * (p_b.y * p_c.z - p_b.z * p_c.y);
	perp.y = +p_a.x * (p_b.z * p_c.w - p_b.w * p_c.z) - p_a.z * (p_b.x * p_c.w - p_b.w * p_c.x) + p_a.w * (p_b.x * p_c.z - p_b.z * p_c.x);
	perp.z = -p_a.x * (p_b.y * p_c.w - p_b.w * p_c.y) + p_a.y * (p_b.x * p_c.w - p_b.w * p_c.x) - p_a.w * (p_b.x * p_c.y - p_b.y * p_c.x);
	perp.w = +p_a.x * (p_b.y * p_c.z - p_b.z * p_c.y) - p_a.y * (p_b.x * p_c.z - p_b.z * p_c.x) + p_a.z * (p_b.x * p_c.y - p_b.y * p_c.x);
	return perp;
}
```

The algorithm you found does this. I tested it myself and it works as expected. However, it has time complexity proportional to N factorial, and becomes extremely slow beyond 10 dimensions. I found (discovered?) a different algorithm that has time complexity proportional to N^4. I'm betting there is still something better, but this algorithm only starts becoming extremely slow beyond about 100 dimensions, which is a big improvement.
```cpp
VectorN VectorND::perpendicular(const Vector<VectorN> &p_input_vectors) {
	// Handle edge cases and determine if the input is valid.
	ERR_FAIL_COND_V_MSG(p_input_vectors.is_empty(), VectorN(), "VectorND.perpendicular: Cannot compute a vector perpendicular to nothing.");
	const int64_t count = p_input_vectors.size();
	const int64_t dimension = p_input_vectors[0].size();
	ERR_FAIL_COND_V_MSG(count != dimension - 1, VectorN(), "VectorND.perpendicular: Expected exactly N-1 vectors for N-dimensional space.");
	for (int64_t input_vec_index = 1; input_vec_index < count; input_vec_index++) {
		const VectorN input_vector = p_input_vectors[input_vec_index];
		ERR_FAIL_COND_V_MSG(input_vector.size() != dimension, VectorN(), "VectorND.perpendicular: All input vectors must have the same dimension.");
	}
	if (dimension > 100) {
		WARN_PRINT("VectorND.perpendicular: Calculating a perpendicular vector in " + itos(dimension) + "-dimensional space will be very slow.");
	}
	// Allocate the result vector and a matrix to perform the intermediate calculations.
	VectorN result = VectorND::fill(0.0, dimension);
	const int64_t sub_size = count; // == dimension - 1
	Vector<VectorN> sub_matrix = VectorND::fill_array(0.0, sub_size, sub_size);
	// Flip the entire result if dimension is even.
	const bool global_parity = (dimension % 2 == 0);
	// This algorithm was found by ChatGPT o4-mini-high deep research and manually tweaked. It is likely suboptimal.
	for (int64_t dimension_index = 0; dimension_index < dimension; dimension_index++) {
		// Build the (N-1)x(N-1) submatrix omitting column `dimension_index`.
		// The naming convention of rows vs columns is for matching the typical
		// Gaussian elimination algorithm, but the actual math works either way.
		for (int64_t row_index = 0; row_index < sub_size; row_index++) {
			int64_t row_col_index = 0;
			VectorN row = sub_matrix[row_index];
			for (int64_t column_index = 0; column_index < dimension; column_index++) {
				if (column_index == dimension_index) {
					continue;
				}
				row.set(row_col_index++, p_input_vectors[row_index][column_index]);
			}
			sub_matrix.set(row_index, row);
		}
		// Compute det(sub_matrix) via Gaussian elimination.
		double det = 1.0;
		bool pivot_parity = dimension_index % 2;
		for (int64_t pivot_index = 0; pivot_index < sub_size; pivot_index++) {
			// Find the pivot row.
			int64_t pivot = pivot_index;
			while (pivot < sub_size && sub_matrix[pivot][pivot_index] == 0.0) {
				pivot++;
			}
			if (pivot == sub_size) {
				det = 0.0;
				break;
			}
			if (pivot != pivot_index) {
				VectorN tmp = sub_matrix[pivot_index];
				sub_matrix.set(pivot_index, sub_matrix[pivot]);
				sub_matrix.set(pivot, tmp);
				pivot_parity = !pivot_parity;
			}
			const double pivot_val = sub_matrix[pivot_index][pivot_index];
			if (pivot_val == 0.0) {
				det = 0.0;
				break;
			}
			for (int64_t row_index = pivot_index + 1; row_index < sub_size; row_index++) {
				const double factor = sub_matrix[row_index][pivot_index] / pivot_val;
				VectorN mat_row = sub_matrix[row_index];
				for (int64_t column_index = pivot_index; column_index < sub_size; column_index++) {
					mat_row.set(column_index, mat_row[column_index] - factor * sub_matrix[pivot_index][column_index]);
				}
				sub_matrix.set(row_index, mat_row);
			}
		}
		if (det != 0.0) {
			for (int64_t diagonal_index = 0; diagonal_index < sub_size; diagonal_index++) {
				det *= sub_matrix[diagonal_index][diagonal_index];
			}
		}
		// Cofactor sign and global flip.
		const bool parity_sign_flip = global_parity ^ pivot_parity;
		const double cofactor = parity_sign_flip ? -det : det;
		result.set(dimension_index, cofactor);
	}
	return result;
}
```

Calling either of these things "the" cross product will make mathematicians yell at you, because these generalizations do not preserve all 3 properties at the same time.
However, even if people disagree on the names, the calculations themselves are generally useful.