
I have a scene with a floating cube and the plane y = 0. I want to create a simple planar shadow of the cube on the plane. To do this, I just have to project the vertices of the cube onto the plane. I can do that with a matrix; many sources (such as Real-Time Rendering) use the exact matrix below. I understand its derivation, and when trying a bunch of points manually it always projects them correctly onto the plane.

$$\begin{bmatrix} \vec{n} \cdot \vec{l} + d - n_x l_x & - n_y l_x & - n_z l_x & - d l_x \\ -n_x l_y & \vec{n} \cdot \vec{l} + d - n_y l_y & - n_z l_y & - d l_y \\ -n_x l_z & - n_y l_z & \vec{n} \cdot \vec{l} + d - n_z l_z & - d l_z \\ -n_x & - n_y & - n_z & \vec{n} \cdot \vec{l} \end{bmatrix}$$

There's also a simpler version for when the plane is just y = 0, since its normal is [0, 1, 0] and d = 0:

$$\begin{bmatrix} l_y & -l_x & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -l_z & l_y & 0 \\ 0 & -1 & 0 & l_y \end{bmatrix}$$
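
For example, this is the kind of CPU-side check I mean (a throwaway sketch with row-major nested arrays and made-up helper names, not my actual renderer code):

    // General RTR-style shadow matrix, built row by row.
    // n = plane normal, d = plane offset (plane: n·p + d = 0), l = light position.
    function buildShadowMatrix(n, d, l) {
      const ndotl = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
      return [
        [ndotl + d - n[0] * l[0], -n[1] * l[0], -n[2] * l[0], -d * l[0]],
        [-n[0] * l[1], ndotl + d - n[1] * l[1], -n[2] * l[1], -d * l[1]],
        [-n[0] * l[2], -n[1] * l[2], ndotl + d - n[2] * l[2], -d * l[2]],
        [-n[0], -n[1], -n[2], ndotl],
      ];
    }

    // Multiply a point (w = 1) by the row-major matrix and dehomogenize.
    function project(m, p) {
      const v = [p[0], p[1], p[2], 1];
      const out = m.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2] + row[3] * v[3]);
      return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
    }

    // For the y = 0 plane the projected y component is always ~0:
    const m = buildShadowMatrix([0, 1, 0], 0, [1, 1, 0]);
    console.log(project(m, [0.5, 2.0, -0.3])); // [1.5, 0, 0.3]

With n = [0, 1, 0] and d = 0 the general form collapses to exactly the simplified matrix above, and the projected points always land on y = 0.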

However, both of these matrices yield the same incorrect result in my WebGPU code. This is the result when the light vector is $\vec{l} = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$:

[Image: incorrectly projected shadow]

The vertices don't get correctly projected.

On the other hand, I found this website, which features a different shadow matrix:

$$\begin{bmatrix} \vec{n} \cdot \vec{l} - n_x l_x & -n_x l_y & -n_x l_z & -n_x l_w \\ -n_y l_x & \vec{n} \cdot \vec{l} - n_y l_y & -n_y l_z & -n_y l_w \\ -n_z l_x & -n_z l_y & \vec{n} \cdot \vec{l} - n_z l_z & -n_z l_w \\ -n_w l_x & -n_w l_y & -n_w l_z & \vec{n} \cdot \vec{l} - n_w l_w \end{bmatrix}$$

The website doesn't explain this matrix, only saying the algorithm for it is "well known", even though it differs from the one above. But it projects the points perfectly!

[Image: correctly projected shadow]

I'm trying to wrap my head around why the first matrix doesn't work, despite it being objectively correct, and why the second matrix does. Can anyone explain?

This is my vertex shader code for reference; I apply the shadow transform after the model transform but before the view and projection transforms.

    vout.position = view_proj.m * shadow_transform.m * node_transform.m * float4(vin.position, 1.0);

1 Answer


The issue isn't the matrix itself, it's the order of the elements in memory. In WebGPU, matrices are stored in column-major order, meaning the first elements of the array fill the first column of the matrix, not the first row.
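
A minimal illustration (made-up numbers, nothing from the actual scene): filling a Float32Array row by row hands the shader the transpose of the matrix you wrote down.

    // WGSL reads a uniform mat4x4 column by column:
    // the shader's element (row, col) comes from flat index (col * 4 + row).
    const rows = [
      [ 1,  2,  3,  4],
      [ 5,  6,  7,  8],
      [ 9, 10, 11, 12],
      [13, 14, 15, 16],
    ];
    const flat = new Float32Array(rows.flat()); // [1, 2, 3, 4, 5, ...]
    // The shader therefore sees column 0 = [1, 2, 3, 4], column 1 = [5, 6, 7, 8], ...
    // which is the transpose of the matrix written above.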

I created the first matrix with this code:

    var shadow_matrix = new Float32Array([
       l[1], -l[0],     0,     0,  // intended as row 0
          0,     0,     0,     0,  // intended as row 1
          0, -l[2],  l[1],     0,  // intended as row 2
          0,    -1,     0,  l[1],  // intended as row 3
    ]);

but because of the column-major order, the matrix the shader actually received was the transpose of the one I intended:

$$\begin{bmatrix} l_y & 0 & 0 & 0 \\ -l_x & 0 & -l_z & -1 \\ 0 & 0 & l_y & 0 \\ 0 & 0 & 0 & l_y \end{bmatrix}$$
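
So one possible fix, keeping the first matrix, is simply to write the same values column by column; a sketch using the same `l` as above:

    // Simplified y = 0 shadow matrix, laid out column by column,
    // which is what WebGPU's column-major storage expects.
    var shadow_matrix = new Float32Array([
       l[1],     0,     0,     0,  // column 0
      -l[0],     0, -l[2],    -1,  // column 1
          0,     0,  l[1],     0,  // column 2
          0,     0,     0,  l[1],  // column 3
    ]);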

The code for the second matrix, however, was already written the correct way for column-major storage:

    // Indices 0, 4, 8, 12 land in the first row under column-major storage.
    shadow_matrix[0]  = dotNL - l[0] * n[0];
    shadow_matrix[4]  = -l[0] * n[1];
    shadow_matrix[8]  = -l[0] * n[2];
    shadow_matrix[12] = -l[0] * n[3];
    // etc.

which meant it also came out correctly in the shader. Additionally, the first matrix has no $l_w$ term because it implicitly assumes $l_w = 1$ (in the second matrix, $n_w$ plays the role of $d$). The second matrix appears to be generalized so it can handle both point and directional lights: if $l_w = 0$ the light is directional, and if $l_w = 1$ it's a point light.
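
For completeness, a column-major construction of that generalized matrix could look something like this (my own naming, not the website's exact code; `n` is the homogeneous plane $[n_x, n_y, n_z, d]$ and `l` the homogeneous light):

    // Generalized planar shadow matrix, stored column-major for WebGPU.
    // n = [nx, ny, nz, d]   (plane: n·p + d = 0)
    // l = [lx, ly, lz, lw]  (lw = 1: point light position, lw = 0: directional light)
    function buildShadowMatrixColumnMajor(n, l) {
      const dotNL = n[0] * l[0] + n[1] * l[1] + n[2] * l[2] + n[3] * l[3];
      const m = new Float32Array(16);
      for (let col = 0; col < 4; col++) {
        for (let row = 0; row < 4; row++) {
          // Element (row, col) of the matrix the shader should see,
          // written to flat index (col * 4 + row).
          m[col * 4 + row] = (row === col ? dotNL : 0) - l[row] * n[col];
        }
      }
      return m;
    }

With n = [0, 1, 0, 0] and $l_w = 1$, this reproduces the simplified y = 0 matrix from the question once it reaches the shader.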

