gallickgunner
[image]

So I just thought of comparing the results from a naive path tracer and one using next event estimation (a.k.a. explicit direct light sampling). However, the results from the naive PT are very dark.

Is this expected? The one with next event estimation produces brighter images, like in my previous question, which was related to MIS.

I thought that a naive PT would produce images with the same brightness but more noise, and that it would take longer to converge. The main reason for the dark images, I think, is that I'm averaging the colors obtained from the previous run with the next run. On each run I shoot 1 sample per pixel.
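The averaging scheme I'm describing can be sketched in Python like this (the names are made up for illustration, not my actual kernel variables):

```python
def accumulate(prev_avg, new_sample, pass_index):
    """Incremental mean: after N passes of 1 sample each, the stored
    value equals the plain arithmetic average of all N samples."""
    return (prev_avg * pass_index + new_sample) / (pass_index + 1)

# e.g. samples 4.0 then 2.0 -> average 3.0
avg = accumulate(0.0, 4.0, 0)   # after pass 1: 4.0
avg = accumulate(avg, 2.0, 1)   # after pass 2: 3.0
```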

This means if I take 20 samples (20 passes) and only 2 samples ($S_1$ and $S_2$) are useful (they hit the light in the end) while 18 return 0 since they don't hit the light, I'd be computing

$\displaystyle{\frac{S_1 + S_2 + 18 \times 0}{20}}$

The brighter colors will surely get averaged out. Moreover, they won't converge, or in other words they won't get brighter, since the number of useless samples far outweighs the number of useful ones. However, in every implementation I could find on the net, people averaged exactly like I did. So is that actually a problem, or is it expected?
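To sanity-check whether the plain average really stays dark, here is a toy Monte Carlo sketch in Python: most 1-spp passes return 0 and a rare pass returns a bright value, yet the running mean approaches the true expected value ($0.1 \times 10 = 1$) rather than converging to something dark (the probability and radiance numbers are made up for illustration):

```python
import random

def running_average(n_passes, p_hit=0.1, radiance=10.0, seed=42):
    """Toy naive-PT accumulation: each pass is one sample that either
    'hits the light' (probability p_hit, returning radiance) or misses
    (returning 0). The running mean (S1 + S2 + 0*k)/N is the unbiased
    Monte Carlo estimate; the zeros add noise, not darkness."""
    rng = random.Random(seed)
    avg = 0.0
    for n in range(1, n_passes + 1):
        sample = radiance if rng.random() < p_hit else 0.0
        avg += (sample - avg) / n  # incremental mean
    return avg

print(running_average(200_000))  # settles near 1.0, the true mean
```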

I stumbled upon this question, which is pretty much the same thing the OP is asking, but in the end they find a problem with the ray tracer and not the path tracer. I don't understand.

If you guys are interested in the code, here is my kernel. The main functions to look at are shading(), which contains the whole bouncing-around portion, and EvaluateBRDF, which simply evaluates the Blinn-Phong model. The averaging is done at the end of the main kernel, evaluatePixelRadiance.

UPDATE: I corrected what Stefan pointed out and removed color clamping. I was using an 8-bit default framebuffer and a 32-bit floating-point RBO, so I guess the data gets clamped automatically when blitting to the default framebuffer. That seems to have solved the issue. I'll update the results shortly.
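A quick way to see why per-sample clamping darkens the naive PT in particular (the numbers below are made up, but the mechanism is what Stefan pointed out): clamping each sample before averaging biases the mean downward, while clamping only the final averaged value, which is effectively what the 8-bit blit does, leaves it intact whenever the true mean is at most 1.

```python
# 1 pass in 10 hits the light with radiance 10, the rest return 0;
# the true pixel value is 10 * 10.0 / 100 = 1.0
samples = [10.0 if i % 10 == 0 else 0.0 for i in range(100)]

# clamp after averaging: the estimate survives
mean_then_clamp = min(sum(samples) / len(samples), 1.0)
# clamp each sample first: the rare bright samples are crushed
clamp_then_mean = sum(min(s, 1.0) for s in samples) / len(samples)

print(mean_then_clamp, clamp_then_mean)  # 1.0 vs 0.1
```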

I think this solved my MIS issue as well; I'll share results on the MIS post soon. Here is the one for the naive PT: 5000 samples in around 10 seconds. It also seems I wasn't getting fireflies, as expected, because I was clamping radiance. Good to learn new things :)

[image]

[image]
