I am building a deferred rendering engine and have a question. The article I took the sample code from suggests computing the screen position of a pixel as follows:
```hlsl
VertexShaderFunction()
{
    ...
    output.Position = mul(worldViewProj, input.Position);
    output.ScreenPosition = output.Position;
}

PixelShaderFunction()
{
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    ...
}
```
My question: what if I do the division in the vertex shader instead? That should improve performance, since the vertex shader runs far fewer times than the pixel shader. Or would I get the per-vertex result instead of the per-pixel one, as with per-vertex lighting? Here is how I want to do it:
```hlsl
VertexShaderFunction()
{
    ...
    output.Position = mul(worldViewProj, input.Position);
    output.ScreenPosition.xy = output.Position.xy / output.Position.w;
}

PixelShaderFunction()
{
    float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    ...
}
```
What exactly happens to the data I pass from the VS to the PS? How exactly is it interpolated? Will it still give the correct per-pixel result in this case? I tried running the game both ways and saw no visual difference. Is my assumption right?
Thanks.
P.S. I am optimizing the point-light shader, so I actually pass sphere geometry into the VS.
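(For context on the interpolation question: the rasterizer interpolates vertex-shader outputs perspective-correctly, i.e. it effectively interpolates attr/w and 1/w linearly in screen space and divides per pixel. Dividing by w already in the vertex shader therefore changes the result whenever w differs between the triangle's vertices, which it does for sphere geometry. A CPU-side Python sketch, with made-up vertex values for illustration:)

```python
def persp_interp(a0, w0, a1, w1, t):
    """Perspective-correct interpolation of attribute a at screen-space
    fraction t along an edge between vertices with clip-space w0, w1."""
    num = (1 - t) * (a0 / w0) + t * (a1 / w1)
    den = (1 - t) / w0 + t / w1
    return num / den

# Clip-space x and w for two vertices; NDC x = clip x / w.
x0, w0 = 0.0, 1.0   # ndc0 = 0.0 (near vertex)
x1, w1 = 4.0, 4.0   # ndc1 = 1.0 (far vertex)
t = 0.5             # halfway along the edge in *screen space*

# Variant A (divide in the PS): interpolate clip x and w, divide per pixel.
ndc_a = persp_interp(x0, w0, x1, w1, t) / persp_interp(w0, w0, w1, w1, t)

# Variant B (divide in the VS): interpolate the already-divided NDC value.
ndc_b = persp_interp(x0 / w0, w0, x1 / w1, w1, t)

print(ndc_a)  # 0.5 -> linear in screen space, the correct screen position
print(ndc_b)  # 0.2 -> skewed toward the near vertex
```

So in general the two variants are not equivalent; on finely tessellated geometry the error can be small enough to miss visually, which may be why both versions looked the same.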
Solution:
```hlsl
struct PSInput
{
    float2 vPos : VPOS;   // Raw pixel coordinates supplied by the rasterizer (D3D9, SM3.0).
};

PixelShaderFunction(PSInput input)
{
    float2 ScrPos = input.vPos * halfPixel * 2;   // halfPixel = 0.5 / screen size, so this lands in [0, 1].
    float2 TexCoord = ScrPos + halfPixel;         // The correct screen-space texture coordinates (D3D9 half-texel offset).

    // Bonus: reconstructing the world position.
    ScrPos = ScrPos * 2 - 1;                      // [0, 1] -> [-1, 1]
    ScrPos = float2(ScrPos.x, -ScrPos.y);         // Flip y: NDC y points up, screen y points down.
    float4 position;
    position.xy = ScrPos;
    position.z = depthVal;                        // Read from the depth map.
    position.w = 1.0f;
    position = mul(position, InvertViewProjection);
    position /= position.w;
    ...
}
```
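A CPU-side Python sketch of the same VPOS arithmetic, to check where the numbers land (the 800x600 screen size is an assumption for illustration; `half_pixel` mirrors the shader's `halfPixel` constant):

```python
W, H = 800, 600                    # assumed screen size
half_pixel = (0.5 / W, 0.5 / H)    # the shader's halfPixel constant

def from_vpos(vx, vy):
    """Mirror of the HLSL above: VPOS pixel coords -> texcoord and NDC."""
    scr = (vx * half_pixel[0] * 2, vy * half_pixel[1] * 2)       # [0, 1) range
    texcoord = (scr[0] + half_pixel[0], scr[1] + half_pixel[1])  # D3D9 half-texel offset
    ndc = (scr[0] * 2 - 1, -(scr[1] * 2 - 1))                    # y flipped: NDC y points up
    return texcoord, ndc

tc, ndc = from_vpos(0, 0)  # top-left pixel
print(tc)   # (0.000625, 0.000833...) -> center of texel (0, 0)
print(ndc)  # (-1.0, 1.0)             -> top-left corner in NDC
```

Unlike the interpolated-varying approach, VPOS comes straight from the rasterizer per fragment, so no interpolation question arises at all.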