After reading datenwolf's 2011 answer concerning tile-based render setup in OpenGL, I attempted to implement his solution. The source image looks like this (at 800 x 600)

The resulting image, stitched from 2x2 tiles rendered at 800 x 600 each, looks like this.

As you can see they don't exactly match, though I can see something vaguely interesting has happened. I'm sure I've made an elementary error somewhere but I can't quite see it.
I'm doing 4 passes where:

- w, h are 2, 2 (2x2 tiles)
- x, y are (0,0), (1,0), (0,1) and (1,1) in each of the 4 passes
- MyFov is 1.30899692 (75 degrees)
- MyWindowWidth, MyWindowHeight are 800, 600
- MyNearPlane, MyFarPlane are 0.1, 200.0

The algorithm to calculate the frustum for each tile is:
```cpp
auto aspect  = static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight);
auto right   = -0.5f * Math::Tan(MyFov) * MyShaderData.Camera_NearPlane;
auto left    = -right;
auto top     = aspect * right;
auto bottom  = -top;
auto shift_X = (right - left) / static_cast<float>(w);
auto shift_Y = (top - bottom) / static_cast<float>(h);
auto frustum = Math::Frustum(left + shift_X * static_cast<float>(x),
                             left + shift_X * static_cast<float>(x + 1),
                             bottom + shift_Y * static_cast<float>(y),
                             bottom + shift_Y * static_cast<float>(y + 1),
                             MyShaderData.Camera_NearPlane,
                             MyShaderData.Camera_FarPlane);
```

where `Math::Frustum` is:
```cpp
template<class T>
Matrix4x4<T> Frustum(T left, T right, T bottom, T top, T nearPlane, T farPlane)
{
    Matrix4x4<T> r(InitialiseAs::InitialiseZero);
    r.m11 = (static_cast<T>(2) * nearPlane) / (right - left);
    r.m22 = (static_cast<T>(2) * nearPlane) / (top - bottom);
    r.m31 = (right + left) / (right - left);
    r.m32 = (top + bottom) / (top - bottom);
    r.m33 = -(farPlane + nearPlane) / (farPlane - nearPlane);
    r.m34 = static_cast<T>(-1);
    r.m43 = -(static_cast<T>(2) * farPlane * nearPlane) / (farPlane - nearPlane);
    return r;
}
```

For completeness, my Matrix4x4 layout is:
```cpp
struct
{
    T m11, m12, m13, m14;
    T m21, m22, m23, m24;
    T m31, m32, m33, m34;
    T m41, m42, m43, m44;
};
```

Can anyone spot my error?
Edit:
So derhass explained it to me: a much easier way of doing this is simply to scale and translate the projection matrix. For testing, I applied a 2x scale and a per-tile translation to the projection as follows (changing the translate values for each tile):
```cpp
auto scale      = Math::Scale(2.f, 2.f, 1.f);
auto translate  = Math::Translate(0.5f, 0.5f, 0.f);
auto projection = Math::Perspective(MyFov,
                                    static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight),
                                    MyShaderData.Camera_NearPlane,
                                    MyShaderData.Camera_FarPlane);
MyShaderData.Camera_Projection = scale * translate * projection;
```

The resulting image (4 tiles stitched together) is below. I think the discontinuities in the image are caused by the post-processing, so that's another issue I might have to deal with at some point.

For a projection matrix `P`, you can simply use `P' = S(2,2,1) * T(0.5, -0.5, 0) * P` (with `S` being a scale matrix and `T` a translation) as a replacement for `P` (no matter what kind of projection `P` is, it will work in any case). All of this assumes you use GL's usual matrix conventions, so that the transformation of a vertex `v` is `v' = M * v`. If you use `v' = v * M` instead, post-multiplying the modifications is the correct way.