I want to calculate world coordinates from camera coordinates. However, I seem to have a problem with my understanding of how matrices in HLSL work.
From world to camera is clear:
    cameraPosition = mul(mul(worldPosition, view), projection);

Logic suggests that for the reverse I could just use something like

    worldPosition = mul(mul(cameraPosition, invProjection), invView);

However, when I check whether this is correct with

    cameraPosition = mul(mul(mul(mul(cameraPosition, invProjection), invView), view), projection);

I don't get the same point back.
The inverses themselves should be fine, since view * invView produces the identity matrix (and likewise for the projection).
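For reference, here is roughly the complete test shader I am describing; a minimal sketch only, where VS_roundtrip and the intermediate names are just placeholders, and the matrices are assumed to be ordinary effect parameters set from the application (invView and invProjection being the CPU-computed inverses mentioned above):

    float4x4 view;
    float4x4 projection;
    float4x4 invView;
    float4x4 invProjection;

    void VS_roundtrip(in float4 inPosition : POSITION, out float4 outPosition : POSITION)
    {
        // forward transform: what I call cameraPosition above
        float4 forward = mul(mul(inPosition, view), projection);

        // backward through the inverses; I expect this to equal inPosition
        float4 back = mul(mul(forward, invProjection), invView);

        // forward once more; I expect this to match the first result, but it does not
        outPosition = mul(mul(back, view), projection);
    }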
What is my misunderstanding here? Even the simpler case does not work:
    void VS_test(in float4 inPosition : POSITION, out float4 outPosition : POSITION)
    {
        outPosition = inPosition;
    }

produces the triangle I want. However, using

    void VS_test(in float4 inPosition : POSITION, out float4 outPosition : POSITION)
    {
        outPosition = mul(mul(inPosition, view), invView);
    }

already produces no visible triangle. Same with

    void VS_test(in float4 inPosition : POSITION, out float4 outPosition : POSITION)
    {
        outPosition = mul(inPosition, mul(view, invView));
    }

The pixel shader is just a shader which returns a constant color.
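For completeness, the pixel shader is essentially nothing more than this (a sketch; PS_test is a placeholder name and the actual color does not matter):

    float4 PS_test() : COLOR
    {
        // constant color, just to make the triangle visible
        return float4(1.0f, 0.0f, 0.0f, 1.0f);
    }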
UPDATE
I have 3D camera-space coordinates WITH a z-buffer value, like (0, 0, zNear) for the point directly in the center of the screen. I want to know which world coordinates correspond to this point by running the whole view transform backwards.
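Concretely, what I am trying to compute looks roughly like this; just a sketch, where cameraToWorld is a helper name made up for illustration and I assume the camera-space point is padded to a float4 with w = 1:

    float4x4 invView;
    float4x4 invProjection;
    float zNear;    // distance to the near plane

    // undo the projection first, then the view transform, i.e. the
    // forward chain mul(mul(p, view), projection) run in reverse
    float4 cameraToWorld(float4 cameraSpacePos)
    {
        return mul(mul(cameraSpacePos, invProjection), invView);
    }

    // e.g. for the point in the center of the screen:
    // float4 worldPos = cameraToWorld(float4(0.0f, 0.0f, zNear, 1.0f));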