
  • So my question is: how do we do this on modern GPUs? I don't think there's any way to query the scanout, and it seems you can't really submit per-scanline draw calls. Even if you could, what guarantees do you have that your draws will reach the display before the scanout? Commented Aug 5, 2015 at 16:10
  • @Mokosha Correct, there's no way to query the scanout directly AFAIK. At best, you can figure out when vsync occurs (via some OS signal) and estimate where the scanout is by timing relative to that, knowing the details of the video mode. For rendering, you can experiment to find out how long it usually takes between glFlush and when the rendering is done, and make some guesses based on that. Ultimately, you have to build some slack into your timing in case of error (e.g. stay 2-3 ms ahead of the scanout) and accept that there will probably be occasional artifacts. Commented Aug 5, 2015 at 20:39
  • The increased latency is caused by vsync, which synchronizes the front/back buffer swap with the monitor's vblank. Double buffering by itself doesn't add this latency, and it is useful for minimizing flicker, since each pixel in the front buffer then changes only once per refresh. Commented Aug 6, 2015 at 0:29
  • I've come up with an accurate way to predict rasters without a scanline query; see the answer below. Commented Mar 27, 2018 at 14:24
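The vsync-relative estimate described above can be sketched as follows. This is a minimal illustration, not anyone's actual implementation: it assumes you already have a timestamp for the most recent vblank (from some OS-level vsync signal) and that you know the video mode's refresh rate and total line count (visible lines plus the vertical blanking interval). The function name and parameters are hypothetical.

```python
def estimate_scanline(last_vsync_time, now, refresh_hz=60.0,
                      total_lines=1125, visible_lines=1080):
    """Estimate which line the display is currently scanning out.

    last_vsync_time, now: timestamps in seconds (same clock).
    total_lines: lines per frame including the blanking interval
                 (1125 total / 1080 visible is typical for 1080p).
    Returns (line, in_vblank). Accuracy depends entirely on how
    precise the vsync timestamp and refresh rate are, so real code
    should keep a safety margin (e.g. stay 2-3 ms ahead).
    """
    frame_period = 1.0 / refresh_hz
    # Fraction of the current refresh that has elapsed since vblank.
    phase = ((now - last_vsync_time) % frame_period) / frame_period
    line = int(phase * total_lines)
    return line, line >= visible_lines
```

In practice you would feed it timestamps from a monotonic clock such as `time.perf_counter()`; halfway through a 60 Hz refresh it reports a line near the middle of the frame, and near the end of the period it reports the beam inside the blanking interval.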