While studying a path tracing implementation written in GLSL, I noticed that the author returns a pdf alongside the generated camera ray:
```glsl
Ray rayGen(in vec2 uv, out float pdf) {
    // ...
    Ray ray;
    ray.origin = camPos;
    ray.direction = normalize(pinholePos - sensorPos);
    pdf = 1.0 / pow(dot(ray.direction, camForward), 3.0);
    return ray;
}
```

I understand that the 1/cos³ in the pdf comes from the conversion between differential area and differential solid angle, from reading these parts of pbrt: 4.2.3 Integrals over Area and 16.1.1 Sampling Cameras.
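For reference, here is my understanding of where the 1/cos³ comes from (following pbrt's derivation; θ is the angle between the ray and camForward, d the sensor-to-pinhole distance, and A the sensor area — the code seems to assume d = 1 and A = 1):

$$
\mathrm{d}\omega = \frac{\mathrm{d}A \,\cos\theta}{r^2},
\qquad r = \frac{d}{\cos\theta}
\;\;\Rightarrow\;\;
\mathrm{d}\omega = \frac{\mathrm{d}A \,\cos^3\theta}{d^2}
$$

so sampling the sensor uniformly over its area, $p_A = 1/A$, gives a solid-angle density of

$$
p_\omega = p_A \,\frac{\mathrm{d}A}{\mathrm{d}\omega} = \frac{1}{A} \cdot \frac{d^2}{\cos^3\theta}
$$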
But I am confused about why the final computed radiance has to be divided by this pdf in such a simple path tracing implementation (I have never seen this in many other implementations):
```glsl
void main() {
    // ...
    float pdf;
    Ray ray = rayGen(uv, pdf);
    float cos_term = dot(camForward, ray.direction);

    // accumulate sampled color on accumTexture
    vec3 radiance = computeRadiance(ray) / pdf;
    color = texture(accumTexture, texCoord).xyz + radiance * cos_term;
    // ...
}
```

Lastly, I also do not understand why the radiance that keeps being added into the final color (for progressive rendering; it is later divided by the sample count) has to be multiplied by a cosine term.
Any insights on these will be appreciated!