I am trying to make my camera representation work for images whose aspect ratio isn't 1 (640x480, 1920x1080, ...), but I am having trouble getting it right.
The camera model is pretty simple: it's a pinhole camera with (F)orward, (L)eft and (U)p vectors, and a point in space which serves as its (O)rigin. F points towards the center of the image plane, and U and L are perpendicular to each other and to F, forming the camera's coordinate space.
The way it works is that I define everything except the L vector, which is the normalized cross product of F and U multiplied by the aspect ratio, so that its length is "as wide" as the image is going to be compared to its height.
Then, to get a ray pointing at pixel (i, j), I create a ray starting at O, pointing at the top-left corner of the image plane (U + F + L), and then step it rightwards and downwards across the plane according to j and i.
The code is this:
```cpp
class Camera {
public:
    Direction L, U, F;
    Point O;
    size_t width, height;
    size_t rays_per_pixel;

private:
    // For randomizing ray directions inside the pixel's square
    static std::mt19937 gen;
    static std::uniform_real_distribution<double> pixel_distr;

public:
    Camera(Point _O, Direction _U, Direction _F, size_t _width, size_t _height, size_t _rays_per_pixel)
        : U(_U), F(_F), O(_O), width(_width), height(_height), rays_per_pixel(_rays_per_pixel)
    {
        // L is perpendicular to F and U, then multiplied by the aspect ratio
        double aspect_ratio = ((double) _width / (double) _height);
        // * between vectors is the cross product,
        // .v just accesses their internal vector class
        L = (F.v * U.v).normalize() * aspect_ratio;
    }

    // Return a ray pointing from O to a pixel in the image, with a small
    // random variation across the pixel's area
    [[nodiscard]] Ray get_ray(size_t _i, size_t _j) const {
        // Puts the ray in the pixel's center, then adds a random value in [0, 0.5)
        double i = (double) _i + 0.5 + pixel_distr(gen);
        double j = (double) _j + 0.5 + pixel_distr(gen);
        return {
            O,  // Origin
            Direction(U.v + F.v + L.v                                         // Top-left corner
                      - ((2 * L.v.modulus() * L.v) / ((double) width) * j)    // Rightwards advance (as a subtraction)
                      - ((2 * U.v.modulus() * U.v) / ((double) height) * i))  // Downwards advance (as a subtraction)
        };
    }
};
```

If I indicate width and height with an aspect ratio of 1, with 512x512 as an example, I get this (ignore the texture, I know):
If I indicate width and height with an aspect ratio of 2 or anything else, like 640x480, I get this, which at least is somewhat hilarious:
The vectors used are:
```cpp
Point O(0, 0, -3.5);
Direction U(0, 1, 0);
Direction F(0, 0, 3);
```
And L in the first case is Direction L(-1, 0, 0), whereas in the second case it is L(-2, 0, 0).
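For what it's worth, here is a minimal standalone sketch (with a hypothetical Vec3 type and helpers, not the project's Direction class) of how those L values, and the top-left corner U + F + L, fall out of the formulas above. The 2:1 case is assumed to come from something like a 1024x512 image:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

Vec3 normalize(Vec3 a) {
    double m = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/m, a.y/m, a.z/m };
}

Vec3 scale(Vec3 a, double s) { return { a.x*s, a.y*s, a.z*s }; }
Vec3 add(Vec3 a, Vec3 b)     { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

int main() {
    Vec3 F{0, 0, 3}, U{0, 1, 0};

    // Aspect ratio 1 (e.g. 512x512): L = normalize(F x U) * 1 = (-1, 0, 0)
    Vec3 L1 = scale(normalize(cross(F, U)), 512.0 / 512.0);
    // Aspect ratio 2 (e.g. 1024x512): L = (-2, 0, 0)
    Vec3 L2 = scale(normalize(cross(F, U)), 1024.0 / 512.0);
    // 640x480 gives an aspect ratio of ~1.33, so L is about (-1.33, 0, 0)
    Vec3 L3 = scale(normalize(cross(F, U)), 640.0 / 480.0);

    // Top-left corner of the image plane for the square case: U + F + L = (-1, 1, 3)
    Vec3 corner = add(add(U, F), L1);

    std::printf("L1 = (%g, %g, %g)\n", L1.x, L1.y, L1.z);
    std::printf("L2 = (%g, %g, %g)\n", L2.x, L2.y, L2.z);
    std::printf("L3 = (%g, %g, %g)\n", L3.x, L3.y, L3.z);
    std::printf("corner = (%g, %g, %g)\n", corner.x, corner.y, corner.z);
}
```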
It seems to repeat the image vertically with a pattern, and I have made sure the logic error shouldn't be outside of this code (I request every single pixel across the plane correctly, etc.).
Is something else needed to get a "widescreen" image? Should I change the camera model?
Edit:
Outside of the camera stuff, a rendering job writes to the image like this; the image itself is a vector representing a flattened matrix of dimensions [height][width]:
```cpp
void rendering_job(const Scene &scene, std::vector<Vector3d> &img, size_t i, size_t j) {
    Vector3d temp_emission;
    size_t number_of_bounces = 0;
    for (size_t r = 0; r < scene.camera.rays_per_pixel; r++) {
        Ray ray = scene.camera.get_ray(i, j);
        temp_emission = temp_emission + integrator_sample(scene, ray, number_of_bounces);
    }
    // Printing every single index access shows that
    // every single pixel gets addressed (from 0 to
    // scene.camera.height*scene.camera.width-1)
    img[i*scene.camera.height + j] = temp_emission / scene.camera.rays_per_pixel;
}
```

It then gets passed to a module which writes the PPM image like this (the factor and MAX stuff is for doing tonemapping):
```cpp
void write(std::string nombFich) const {
    double factor = color_resolution / max;
    std::ofstream outfile(nombFich);
    outfile << std::fixed << std::setprecision(0);
    outfile << "P3" << std::endl;
    outfile << "# " << nombFich << std::endl;
    outfile << "#MAX=" << max << std::endl;
    outfile << width << " " << height << std::endl;
    outfile << color_resolution << std::endl;
    int i = 0;
    for (auto v : img) {
        outfile << v[0]*factor << " " << v[1]*factor << " " << v[2]*factor << " ";
        i++;
        if (i == width - 1) { i = 0; outfile << std::endl; }
    }
}
```

The writing process correctly writes height rows of width values in the PPM.
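For context, the plain-text P3 PPM layout this writer produces looks roughly like the following; the values here are made up for a hypothetical 2x2 image, with max assumed to be 1 and color_resolution 255:

```
P3
# out.ppm
#MAX=1
2 2
255
255 0 0 0 255 0
0 0 255 255 255 255
```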
Edit2:
DAMN... It was so obvious I couldn't see the forest for the trees. I was indexing the image as img[i*scene.camera.height + j] instead of img[i*scene.camera.width + j]...
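For reference, a minimal sketch of row-major indexing (the pixel_at helper is hypothetical, not from the project): in a flat vector laid out as [height][width], pixel (i, j) lives at i*width + j. This also explains why the square renders looked fine, since with width == height the two index expressions happen to coincide.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: row-major access into a flat [height][width] image.
template <typename T>
T &pixel_at(std::vector<T> &img, std::size_t width, std::size_t i, std::size_t j) {
    // Row i starts at offset i*width; column j is added on top.
    // i*height + j only gives the same index when width == height.
    return img[i * width + j];
}

int main() {
    std::size_t width = 640, height = 480;
    std::vector<int> img(width * height, 0);
    pixel_at(img, width, 479, 639) = 255;  // bottom-right pixel
}
```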
I assume that now I should tweak my camera in order to get rid of the stretching?
