
With a regular raymarching algorithm we can fade the reflected pixel based on the number of iterations used in the loop. I can't use an iteration count when combining a hierarchical-Z (HiZ) pass with a raymarching refinement loop, because the two loops do not work the same way: using a global iteration value produces incoherent fading.

To obtain a fade that works for both, I'm using this: SSRcolor = float4(Color, 1 - abs(SSray.z - D) * 500);

Here Color is the scene color (multilayer), SSray.z is the depth where the ray hits something in one layer, and D is the depth where the ray started. The factor 500 scales the depth difference relative to 1: 500 gives good results for my needs, but it can be larger for more fading or smaller for less.
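For clarity, here is a minimal sketch of how that fade is applied once the hit is known (ApplyDepthFade and FadeScale are just illustrative names standing in for the formula and the 500 constant above, not my actual shader):

    // Sketch of the fade above: SceneColor = Color, HitDepth = SSray.z, StartDepth = D.
    // FadeScale plays the role of the 500 constant; saturate() only clamps alpha to [0, 1].
    float4 ApplyDepthFade(float3 SceneColor, float HitDepth, float StartDepth, float FadeScale)
    {
        float FadeAlpha = saturate(1.0 - abs(HitDepth - StartDepth) * FadeScale);
        return float4(SceneColor, FadeAlpha);
    }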

A problem I have with both iteration-based and depth-based fading is that the fade ratio is POV dependent, as shown in the picture below. With iterations I guess you run more loop rounds when the camera is closer, and with depth-based fading the closer you are the bigger abs(SSray.z - D) gets.

Is it possible to make the fading POV independent? Taking the constant 500 as a starting point, can we design a formula for it that takes the distance to the POV (or something similar) into account?

[Screenshot: the reflection fade changes as the camera moves closer to or farther from the scene]

  • Depth does not change with point of view. It does change with view angle, since it's only measured along the "forward" direction, unlike distance, which is the full length of the ray. Have you tried using ray length instead of depth for this purpose? Commented Oct 20, 2022 at 12:57
  • It works a little better with the length, but the fading is still not constant when moving. It's fully opaque when I move far from the scene. Is it important to mention that my raymarching is done in screen space? Commented Oct 20, 2022 at 15:57
  • After some tries in screen, view and world space, the result with the length is almost the same. Along the way I observed that it is better to make the raymarching step depend on the view-space position, to get better sampling for far objects and avoid the "sliced" look. Some work is still needed to find where the fading problem comes from. Commented Oct 21, 2022 at 8:47
  • View and world space use the same units, so of course the length is the same between them. Commented Oct 21, 2022 at 9:33

1 Answer


As stated in the comments, using the length of the ray from its start to the hit point worked better but was not fully satisfying. What works best for me is to make this length relative to the maximum expected length, something like:

    float3 RayStart = float3(Input.TexCoord.xy, Depth); // I'm doing SSR in screen space
    float3 RayEnd   = RayStart + RayDir * MaxDistance;
    float  MaxLength = length(RayEnd - RayStart);
    float3 RayHit = RayStart;
    // ... do the SSR march/refinement until RayHit finds an intersection ...
    float  FadeAlpha = length(RayHit - RayStart) / MaxLength;
    float4 HitColor  = float4(texture(sampler, RayHit.xy).rgb, FadeAlpha); // scene color at the hit UV

In addition, I observed that far objects get a "sliced" look, a well-known SSR artifact, when I use a constant RayStep (e.g. 0.02) to advance along the ray direction. This was solved by making RayStep depend on the view-space z of the starting point, like:

    // PosV is the starting position of the ray (in view space for me at this stage)
    RayStep = 3.0 / PosV.z; // tune the constant 3 for a finer or coarser step
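Putting the two ideas together, here is a rough end-to-end sketch assuming a plain linear screen-space march (the HiZ/refinement passes are not shown; DepthTex, ColorTex, LinearSampler and NumSteps are placeholder names, not the actual resources):

    // Illustrative only: a simple linear march combining the relative-length fade
    // and the depth-proportional step. Resource names are placeholders.
    Texture2D    DepthTex;
    Texture2D    ColorTex;
    SamplerState LinearSampler;
    static const int NumSteps = 128;

    float4 TraceSSR(float3 RayStart,   // (uv.x, uv.y, depth) in screen space
                    float3 RayDir,     // screen-space ray direction
                    float  MaxDistance,
                    float  StartViewZ) // view-space z of the ray start
    {
        float3 RayEnd    = RayStart + RayDir * MaxDistance;
        float  MaxLength = length(RayEnd - RayStart);

        // Coarser step close to the camera, finer step far away,
        // which removes the "sliced" look on distant geometry.
        float RayStep = 3.0 / StartViewZ;

        float3 RayHit = RayStart;
        for (int i = 0; i < NumSteps; ++i)
        {
            RayHit += RayDir * RayStep;
            float SceneDepth = DepthTex.SampleLevel(LinearSampler, RayHit.xy, 0).r;
            if (RayHit.z >= SceneDepth) // ray went behind the depth buffer: hit
            {
                float FadeAlpha = length(RayHit - RayStart) / MaxLength;
                return float4(ColorTex.SampleLevel(LinearSampler, RayHit.xy, 0).rgb, FadeAlpha);
            }
        }
        return float4(0.0, 0.0, 0.0, 0.0); // no hit: fully transparent
    }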
