I have a simple (and maybe quite naive) question regarding dual-GPU use for Virtual Reality (VR); it has nagged me for years, and I haven't been able to figure out why this couldn't work, at least in principle:
I realize that with alternate frame rendering techniques (as SLI and CrossFire used to work), the efficiency is quite meager because of synchronization issues and so on - and I am aware of Nvidia's VR SLI project (which is probably dead by now).
But shouldn't the use of two GPUs, one for the left eye and one for the right, working completely independently of each other, be a trivial way to double the frame rate, i.e. achieve an "SLI efficiency" of 100%? I've seen the flow diagrams for VR SLI and noticed that there is synchronization there again; but why is that necessary for VR? The GPUs wouldn't need to cooperate on alternating frames (which is what requires synchronization); in fact, they wouldn't need to know anything about each other at all.
Couldn't we just do the following? Normally, a game engine sends the GPU a job to render the scene from a certain camera position, right? So couldn't we have the engine send an additional job, for the camera position "plus 60 mm to the right" (or whatever interpupillary distance one desires), to a second GPU that doesn't need to know anything about what the first GPU is doing? Whenever one of the GPUs finished rendering a frame, it would send it to "its" eye, fetch the next camera position, and render its next frame, and so on.
The images would not arrive at the headset in sync, but that shouldn't be necessary. The left and right eyepieces could just be treated as two completely independent monitors, couldn't they?
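To make the scheme concrete, here is a rough C++-flavored sketch of what I have in mind (queryHeadPose, renderOnGpu and presentToEye are hypothetical placeholders, not any real engine or display API): one thread per eye, each driving its own GPU, with head tracking as the only shared input and no synchronization point anywhere.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical placeholders - a real engine would query the headset's
// tracking, issue draw calls on the GPU bound to this thread, and scan
// the finished frame out to one eye's panel.
struct Pose { double x = 0.0; };               // simplified: lateral position only
Pose queryHeadPose() { return {}; }            // latest tracking sample
void renderOnGpu(int gpu, const Pose& cam) {}  // draw the scene on this GPU
void presentToEye(int gpu) {}                  // scanout to this GPU's panel

// One loop per eye: each GPU renders and presents at its own pace and
// never waits for the other. The only shared input is head tracking.
void eyeLoop(int gpu, double offsetMm, std::atomic<bool>& running) {
    while (running) {
        Pose cam = queryHeadPose();
        cam.x += offsetMm;                     // shift this eye's camera
        renderOnGpu(gpu, cam);
        presentToEye(gpu);                     // no cross-GPU sync point
    }
}

int main() {
    std::atomic<bool> running{true};
    // Left eye at the tracked position, right eye 60 mm to the right
    // (or whatever interpupillary distance one desires).
    std::thread left (eyeLoop, 0,  0.0, std::ref(running));
    std::thread right(eyeLoop, 1, 60.0, std::ref(running));
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    left.join();
    right.join();
}
```

The point of the sketch is just that nothing in either loop references the other GPU; whether real VR runtimes and display hardware actually allow presenting to the two panels independently like this is exactly what I'm asking about.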
I'd be very interested to learn where this idea goes wrong - I assume it does, since otherwise the problem would have been solved long ago.