I've been playing around with this tutorial/sample code that demonstrates a simple implementation of light-pre-pass, which is a type of deferred lighting setup.
I'm in the process of implementing point light shadows, using dual-paraboloid shadow maps. I'm following this description of DPM: http://gamedevelop.eu/en/tutorials/dual-paraboloid-shadow-mapping.htm
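For context, the shadow-map generation side of that technique looks roughly like this. This is my paraphrase of the tutorial's vertex shader, not exact code from my project; the names (`LightView`, `NearClip`, `FarClip`) are placeholders:

```hlsl
// Sketch of a dual-paraboloid shadow map vertex shader, paraphrasing the
// tutorial linked above. Matrix/variable names are assumptions, not my actual code.
float4 DPShadowVS(float4 position : POSITION) : POSITION
{
    // transform into light space (view matrix only -- no projection matrix)
    float4 pos = mul(position, LightView);

    // (for the back map, the light camera is flipped, i.e. pos.z = -pos.z)

    float L = length(pos.xyz);     // distance from the light
    pos.xyz /= L;                  // project onto the unit sphere

    // paraboloid projection: divide x and y by (1 + z)
    pos.x = pos.x / (pos.z + 1.0f);
    pos.y = pos.y / (pos.z + 1.0f);

    // store linear depth, normalized to [0..1]
    pos.z = (L - NearClip) / (FarClip - NearClip);
    pos.w = 1.0f;

    return pos;
}
```

The lookup in my point light pixel shader (below) is meant to mirror this: same (1 + z) divide for the front map, (1 - z) for the back map, and the same normalized-distance depth value.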
I am able to create the shadow maps, and they seem to look fine.
I believe that the current problem I'm having is in my pixel shader that looks up a depth value in the shadow map when rendering point lights.
Here is my point light shader code: http://olhovsky.com/shadow_mapping/PointLight.fx
The pixel shader function of interest is PointLightMeshShadowPS.
Does anyone see a glaring error in that function?
Hopefully someone has tackled this problem before :)


As you can see in the images above, the posts' shadows do not line up with the posts themselves, so a transformation must be wrong somewhere.
This is what it looks like when the point light is very close to the ground (almost touching the ground).

As the point light moves closer to the ground, the shadows come together and touch along the line where the two shadow maps meet (that is, along the plane where the light camera was flipped to capture the two shadow maps).
Edit:
Further information:

When I move the point light away from the origin, a line parallel to the light camera's "right" vector clips the shadow. The above image shows the result of moving the point light to the left; if I move it to the right instead, an equivalent clipping line appears on the right. I think this confirms that I'm transforming something incorrectly in the pixel shader, as I suspected.
Edit: To make this question more clear, here are a few pieces of code.
Here is the code that I currently use to draw a shadowed spot light. This works, and uses shadow mapping as you'd expect.
```hlsl
VertexShaderOutputMeshBased SpotLightMeshVS(VertexShaderInput input)
{
    VertexShaderOutputMeshBased output = (VertexShaderOutputMeshBased)0;
    output.Position = mul(input.Position, WorldViewProjection);

    //we will compute our texture coords based on pixel position further
    output.TexCoordScreenSpace = output.Position;
    return output;
}

//////////////////////////////////////////////////////
// Pixel shader to compute spot lights with shadows
//////////////////////////////////////////////////////
float4 SpotLightMeshShadowPS(VertexShaderOutputMeshBased input) : COLOR0
{
    //as we are using a sphere mesh, we need to recompute each pixel position into texture space coords
    float2 screenPos = PostProjectionSpaceToScreenSpace(input.TexCoordScreenSpace) + GBufferPixelSize;

    //read the depth value
    float depthValue = tex2D(depthSampler, screenPos).r;

    //if depth value == 1, we can assume its a background value, so skip it
    //we need this only if we are using back-face culling on our light volumes. Otherwise, our z-buffer
    //will reject this pixel anyway
    clip(-depthValue + 0.9999f);

    // Reconstruct position from the depth value, the FOV, aspect and pixel position
    depthValue *= FarClip;

    //convert screenPos to [-1..1] range
    float3 pos = float3(TanAspect*(screenPos*2 - 1)*depthValue, -depthValue);

    //light direction from current pixel to current light
    float3 lDir = LightPosition - pos;

    //compute attenuation, 1 - saturate(d2/r2)
    float atten = ComputeAttenuation(lDir);

    // Convert normal back with the decoding function
    float4 normalMap = tex2D(normalSampler, screenPos);
    float3 normal = DecodeNormal(normalMap);

    lDir = normalize(lDir);

    // N dot L lighting term, attenuated
    float nl = saturate(dot(normal, lDir))*atten;

    //spot light cone
    half spotAtten = min(1, max(0, dot(lDir, LightDir) - SpotAngle)*SpotExponent);
    nl *= spotAtten;

    //reject pixels outside our radius or that are not facing the light
    clip(nl - 0.00001f);

    //compute shadow attenuation
    float4 lightPosition = mul(mul(float4(pos,1), CameraTransform), MatLightViewProjSpot);

    // Find the position in the shadow map for this pixel
    float2 shadowTexCoord = 0.5 * lightPosition.xy / lightPosition.w + float2(0.5, 0.5);
    shadowTexCoord.y = 1.0f - shadowTexCoord.y;

    //offset by the texel size
    shadowTexCoord += ShadowMapPixelSize;

    // Calculate the current pixel depth
    // The bias is used to prevent floating point errors
    float ourdepth = (lightPosition.z / lightPosition.w) - DepthBias;

    nl = ComputeShadowPCF7Linear(nl, shadowTexCoord, ourdepth);

    float4 finalColor;

    //As our position is relative to camera position, we dont need to use (ViewPosition - pos) here
    float3 camDir = normalize(pos);

    // Calculate specular term
    float3 h = normalize(reflect(lDir, normal));
    float spec = nl*pow(saturate(dot(camDir, h)), normalMap.b*50);
    finalColor = float4(LightColor * nl, spec);

    //output light
    return finalColor * LightBufferScale;
}
```

Now here is the point light code that I'm using, which has some sort of bug in the transformation into light space when using the shadow maps:
```hlsl
VertexShaderOutputMeshBased PointLightMeshVS(VertexShaderInput input)
{
    VertexShaderOutputMeshBased output = (VertexShaderOutputMeshBased)0;
    output.Position = mul(input.Position, WorldViewProjection);

    //we will compute our texture coords based on pixel position further
    output.TexCoordScreenSpace = output.Position;
    return output;
}

float4 PointLightMeshShadowPS(VertexShaderOutputMeshBased input) : COLOR0
{
    // as we are using a sphere mesh, we need to recompute each pixel position
    // into texture space coords
    float2 screenPos = PostProjectionSpaceToScreenSpace(input.TexCoordScreenSpace) + GBufferPixelSize;

    // read the depth value
    float depthValue = tex2D(depthSampler, screenPos).r;

    // if depth value == 1, we can assume its a background value, so skip it
    // we need this only if we are using back-face culling on our light volumes.
    // Otherwise, our z-buffer will reject this pixel anyway
    clip(-depthValue + 0.9999f);

    // Reconstruct position from the depth value, the FOV, aspect and pixel position
    depthValue *= FarClip;

    // convert screenPos to [-1..1] range
    float3 pos = float3(TanAspect*(screenPos*2 - 1)*depthValue, -depthValue);

    // light direction from current pixel to current light
    float3 lDir = LightPosition - pos;

    // compute attenuation, 1 - saturate(d2/r2)
    float atten = ComputeAttenuation(lDir);

    // Convert normal back with the decoding function
    float4 normalMap = tex2D(normalSampler, screenPos);
    float3 normal = DecodeNormal(normalMap);

    lDir = normalize(lDir);

    // N dot L lighting term, attenuated
    float nl = saturate(dot(normal, lDir))*atten;

    /* shadow stuff */
    float4 lightPosition = mul(mul(float4(pos,1), CameraTransform), LightViewProj);
    //float4 lightPosition = mul(float4(pos,1), LightViewProj);

    float posLength = length(lightPosition);
    lightPosition /= posLength;

    float ourdepth = (posLength - NearClip) / (FarClip - NearClip) - DepthBias;
    //float ourdepth = (lightPosition.z / lightPosition.w) - DepthBias;

    if(lightPosition.z > 0.0f)
    {
        float2 vTexFront;
        vTexFront.x = (lightPosition.x / (1.0f + lightPosition.z)) * 0.5f + 0.5f;
        vTexFront.y = 1.0f - ((lightPosition.y / (1.0f + lightPosition.z)) * 0.5f + 0.5f);

        nl = ComputeShadow(FrontShadowMapSampler, nl, vTexFront, ourdepth);
    }
    else
    {
        // for the back the z has to be inverted
        float2 vTexBack;
        vTexBack.x = (lightPosition.x / (1.0f - lightPosition.z)) * 0.5f + 0.5f;
        vTexBack.y = 1.0f - ((lightPosition.y / (1.0f - lightPosition.z)) * 0.5f + 0.5f);

        nl = ComputeShadow(BackShadowMapSampler, nl, vTexBack, ourdepth);
    }
    /* shadow stuff */

    // reject pixels outside our radius or that are not facing the light
    clip(nl - 0.00001f);

    float4 finalColor;

    //As our position is relative to camera position, we dont need to use (ViewPosition - pos) here
    float3 camDir = normalize(pos);

    // Calculate specular term
    float3 h = normalize(reflect(lDir, normal));
    float spec = nl*pow(saturate(dot(camDir, h)), normalMap.b*100);
    finalColor = float4(LightColor * nl, spec);

    return finalColor * LightBufferScale;
}
```
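While debugging, I've also been rewriting the lookup as a single helper to make the front/back symmetry easier to eyeball. This is only a sketch of my own code above, not a confirmed fix; one thing it makes explicit is measuring distance with `length(lightPosition.xyz)` (three components), whereas my shader above takes the length of the full `float4`:

```hlsl
// Sketch: the front/back paraboloid lookup from the pixel shader above,
// factored into one helper. Names are placeholders from my code, not a library API.
// dir   = light-space direction, normalized by the 3-component distance
// zSign = +1.0f for the front map, -1.0f for the back map
float2 ParaboloidTexCoord(float3 dir, float zSign)
{
    float2 uv;
    uv.x = (dir.x / (1.0f + zSign * dir.z)) * 0.5f + 0.5f;
    uv.y = 1.0f - ((dir.y / (1.0f + zSign * dir.z)) * 0.5f + 0.5f);
    return uv;
}
```

With that helper the shadow block reduces to `float L = length(lightPosition.xyz); float3 dir = lightPosition.xyz / L;` followed by a branch on `dir.z`, sampling the front map with `ParaboloidTexCoord(dir, 1.0f)` or the back map with `ParaboloidTexCoord(dir, -1.0f)`.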