Variable Refresh Monitors and Animation

So variable refresh rates have a lot of hype surrounding them right now, with representatives from GPU and monitor vendors saying that they will just magically make existing games better with no effort required on the part of the developer. This is bullshit and you shouldn’t believe them. If you’re a developer you need to be careful to handle these new refresh rates, and if you’re a customer you should consider just locking your monitor to 60Hz for games that don’t do the right thing.

The problem

The issue is that games do animation. In order to compute a frame for display, the game essentially needs to predict when that frame will be presented, and then compute the positions of all the objects at that time. This is only partially true because there’s lag in the display pipeline that we don’t know anything about, but since that lag affects every frame the same way you can typically just ignore it, advance the game state by 1/60th of a second each frame, and your animation will look smooth.

However, if the update frequency is variable then the amount of time it takes to render and display a frame changes every frame. So when we’re computing the next frame we don’t know how far ahead to “step”. It may have been 16ms last frame, but this frame it may be 15 or 17 – we just can’t tell in advance because we don’t know how long it will take the CPU and GPU to finish the frame. This means that objects no longer move smoothly in the world; they move in a jerky fashion because the amount they get advanced each frame isn’t consistent with how long it actually took to display the frame.
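
To make that concrete, here is a minimal sketch in plain C++ of the conventional fixed-step loop. Every name in it is a hypothetical stand-in, not a real engine or API call: the simulation is advanced by a constant 1/60th of a second, which is only correct if the display actually shows every frame for exactly that long.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Hypothetical stand-ins for real engine hooks -- none of these names
    // come from an actual API.
    static double position = 0.0;                       // object moving at 1 m/s
    void update_game_state(double dt) { position += 1.0 * dt; }
    void render_frame() { std::this_thread::sleep_for(std::chrono::milliseconds(12)); }
    void present_frame() { /* on a VRR display the flip happens whenever we get here */ }

    int main() {
        // Classic fixed-step assumption: every frame stays on screen for exactly
        // 1/60 s, so advancing the simulation by a constant 1/60 s looks smooth.
        const double fixed_dt = 1.0 / 60.0;

        for (int frame = 0; frame < 10; ++frame) {
            update_game_state(fixed_dt);  // we advanced by 16.7 ms...
            render_frame();
            present_frame();              // ...but this frame may actually persist
                                          // for 15 ms or 17 ms, and we couldn't know
                                          // which before doing the update above.
            std::printf("frame %d, position %.4f m\n", frame, position);
        }
        return 0;
    }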

Solutions?

So the first solution here is to just lock your frame rate at some fixed number. 60Hz is a decent option, but if you have a variable refresh monitor maybe you’ll go for something odd like 45Hz or whatever. In fact, I would be surprised if the sweet spot for smoothness/performance just happens to be 60Hz by sheer coincidence. However, this is troublesome because our graphics APIs don’t give us an easy way to do this right now. Perhaps these fancy new monitors could just let us set the refresh rate to arbitrary intervals instead, and we could go back to using vsync, perhaps with some kind of clever fallback: if you miss the vsync, the display uses variable refresh to present the frame as quickly as possible and then “resets” the cadence at that point, so that the next “vsync” happens 1/refresh later.
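
No API exposes this today as far as I know, but the fallback logic itself is tiny. A sketch of how a driver or compositor might schedule flips under that scheme, assuming a settable period and a panel that can flip on demand (everything here is hypothetical):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    // Hypothetical driver-side policy, not an existing API: the application asks
    // for a fixed period (say 22 ms). Frames that finish early are held to the
    // deadline like ordinary vsync; a frame that misses is flipped immediately
    // via variable refresh, and the cadence is re-based from that late flip.
    Clock::time_point next_vsync_deadline(Clock::time_point frame_ready,
                                          Clock::time_point deadline,
                                          Clock::duration period) {
        if (frame_ready <= deadline) {
            return deadline + period;     // made it: keep the fixed cadence
        }
        return frame_ready + period;      // missed it: flip now, "reset" the cadence
    }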

However, picking the refresh rate up front is tricky, even if we can pick any number we want. The game is going to run better in some scenes than others, and it will also vary from system to system. Maybe this could be automatic? The game could render each frame with a “fixed” target in mind; this target is set in such a way that we’re very confident we can hit it, and all the physics and animation are updated by that amount of time. If we can simultaneously tune this target based on how long it typically takes to update the frames, we’ll end up running faster where we can, and slower where we need to. So the idea is this:

  1. Render the frame.
  2. If the current frame time is less than the target frame time, do a busy wait then present just at the right time (this should be the overwhelming majority of frames).
  3. If the current frame time is more than the target frame time, present immediately and increase the target time.
  4. If the frame time was significantly less than the target frame time, and has been for some time, slightly decay the target frame time so that we get better fluidity.
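
Here is a minimal sketch of that loop in plain C++, with the engine and swapchain calls stubbed out under hypothetical names; the 0.9 under-budget threshold, the 120-frame patience counter, and the 1% decay are arbitrary numbers chosen purely for illustration.

    #include <chrono>
    #include <thread>

    using Clock = std::chrono::steady_clock;
    using Seconds = std::chrono::duration<double>;

    // Hypothetical stand-ins for real engine/swapchain calls.
    void update_game_state(double dt) { (void)dt; }
    void render_frame() { std::this_thread::sleep_for(std::chrono::milliseconds(10)); }
    void present_frame() { /* on a VRR display this flips more or less immediately */ }

    void run(double initial_target) {          // e.g. run(1.0 / 60.0)
        double target = initial_target;        // seconds we animate by each frame
        int frames_under_budget = 0;

        for (;;) {
            Clock::time_point start = Clock::now();

            update_game_state(target);         // 1. animate exactly by the target step
            render_frame();

            double elapsed = Seconds(Clock::now() - start).count();
            if (elapsed <= target) {
                // 2. Common case: busy-wait so the frame is presented at the target time.
                while (Seconds(Clock::now() - start).count() < target) { /* spin */ }
                // 4. Comfortably under budget for a while? Decay the target slightly.
                if (elapsed < 0.9 * target && ++frames_under_budget > 120) {
                    target *= 0.99;
                    frames_under_budget = 0;
                }
            } else {
                // 3. Missed the target: present immediately and raise the target.
                target = elapsed;
                frames_under_budget = 0;
            }
            present_frame();
        }
    }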

There are many ways you could tweak this. For example, maybe you pick a target frame time based on a histogram of the most recent 100 frame times, or something – picking a time so that 99 of them would’ve come in under budget. Or maybe you fit a normal distribution to the frame times and pick the 99th percentile as the target frame time.
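
For instance, here is one way that could look, with a simple sort standing in for a real histogram; the class and method names are made up:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <deque>
    #include <vector>

    // Keep the last 100 measured frame times (in seconds) and pick a target such
    // that roughly 99 of them would have come in under budget.
    class FrameTimeWindow {
    public:
        void record(double frame_seconds) {
            samples_.push_back(frame_seconds);
            if (samples_.size() > 100) samples_.pop_front();
        }

        double target_99th_percentile(double fallback = 1.0 / 60.0) const {
            if (samples_.empty()) return fallback;
            std::vector<double> sorted(samples_.begin(), samples_.end());
            std::sort(sorted.begin(), sorted.end());
            // 99th-percentile index: with 100 samples this picks sorted[98],
            // so 99 of the samples would have fit under the returned budget.
            std::size_t idx =
                static_cast<std::size_t>(std::ceil(0.99 * sorted.size())) - 1;
            return sorted[idx];
        }

    private:
        std::deque<double> samples_;
    };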

Or maybe something simpler could work. For example, we could make sure our target frame time is always 10% higher than the longest frame in the last second (or something). This means that as our actual frame times drop, the target frame time will drop too (after a second) and we get better fluidity. If our frame times creep up slightly, we will pre-emptively bump the target up to 10% on top of that to give ourselves breathing room in case they keep going up. If we do end up with a frame spike that’s worse than 10% over our worst frame in the last second, we will immediately bump the target frame time up to 10% above that new worst case. You would tune that percentage based on your game – if your frame times vary slowly you could use a lower buffer percentage, and if they vary quickly you may need a larger one.
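
A sketch of that simpler rule, again with made-up names: keep timestamped frame times, drop anything older than a second, and set the target 10% above the worst survivor.

    #include <algorithm>
    #include <chrono>
    #include <deque>

    using Clock = std::chrono::steady_clock;

    // Target frame time = (1 + margin) * worst frame time seen in the last second.
    // The margin is the "10%" from the text and should be tuned per game
    // (smaller if frame times vary slowly, larger if they swing quickly).
    class MarginTargeter {
    public:
        explicit MarginTargeter(double margin = 0.10) : margin_(margin) {}

        // Call once per frame with the measured frame time; returns the new target.
        double on_frame(double frame_seconds) {
            Clock::time_point now = Clock::now();
            history_.push_back({now, frame_seconds});
            // Drop samples older than one second so the target can come back down.
            while (!history_.empty() &&
                   now - history_.front().when > std::chrono::seconds(1)) {
                history_.pop_front();
            }
            double worst = 0.0;
            for (const Sample& s : history_) worst = std::max(worst, s.seconds);
            return (1.0 + margin_) * worst;
        }

    private:
        struct Sample { Clock::time_point when; double seconds; };
        std::deque<Sample> history_;
        double margin_;
    };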

The biggest problem with this scheme is step 2 above: the busy-wait. In a typical renderer this means we can’t actually start doing the rendering for the next frame because we have to baby-sit the present for the previous frame, to make sure it doesn’t fire too early. This wastes valuable CPU cycles and CPU/GPU parallelism. Maybe in DX12 we could have a dedicated “presenter thread” that just submits command buffers and then does the busy-wait to present, while the main render thread actually does the rendering into those command buffers. Really what we want here is a way to tell the OS to sync to an arbitrary time period. For example, if the last couple of frames have all taken around 20 ms (which we can tell by measuring the CPU time, as well as inspecting some GPU profiling queries a frame later to see how long the GPU took), we could call some API to set the vsync period to 22 ms, just to give us some buffer and avoid jittering. This way the OS could potentially use a hardware timer to avoid eating up CPU cycles.
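
Here is a rough sketch of that split using plain C++ threads rather than any real D3D12 API: the render thread hands a “ready to present” token to a presenter thread, which owns the busy-wait, and present_to_screen() is just a placeholder for whatever actually flips the swapchain.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    // Placeholder for whatever actually flips the swapchain -- in D3D12 this
    // would wrap the real present call; it is deliberately left empty here.
    void present_to_screen() {}

    struct PendingPresent {
        Clock::time_point not_before;   // earliest moment this frame may be shown
    };

    std::mutex mtx;
    std::condition_variable cv;
    std::queue<PendingPresent> pending;
    bool shutting_down = false;

    // Dedicated presenter thread: it owns the busy-wait, so the render thread
    // can immediately start recording command buffers for the next frame.
    void presenter_thread() {
        for (;;) {
            PendingPresent p;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return !pending.empty() || shutting_down; });
                if (pending.empty()) return;   // only true here when shutting down
                p = pending.front();
                pending.pop();
            }
            // The wasteful spin now lives on this thread, not the render thread.
            while (Clock::now() < p.not_before) { /* spin */ }
            present_to_screen();
        }
    }

    // Called by the render thread once a frame's GPU work has been submitted.
    void queue_present(Clock::time_point not_before) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            pending.push({not_before});
        }
        cv.notify_one();
    }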

Either way, let’s at least stop pretending that variable refresh rates are magic that can just be enabled for any game and look good. The developer needs to know when these things are enabled and take steps to make sure animation and physics remain smooth. It would be ideal if future APIs could actually help us out in doing this, but one way or another we need to do something to handle it.


4 thoughts on “Variable Refresh Monitors and Animation”

  1. I thought that the motivation behind variable rate monitors was basically this: a game is targeting 60 Hz, and frames render comfortably under 15ms in the common case. But occasionally the scene pushes the budget just a bit — say, when there’s a big battle onscreen, frames occasionally take 18ms to render. With a non-variable-rate monitor, the previous frame’s image will persist for two ticks instead of one, and the result will be significant visual stutter. With a variable rate monitor, instead of requiring frames to persist for integer multiples of 16.7 ms, frames render whenever they’re ready — 10ms, 18 ms, whatever.

    The engine-side issue of stabilizing simulation/animation in the face of varying per-frame work seems orthogonal to the display-side question of whether images persist for integral multiples of a fixed monitor refresh rate.

    I agree that an adaptive engine will get smoother results than an unmodified engine, but don’t understand why you think an unmodified engine doesn’t benefit at all. Please help resolve my confusion! 🙂

    • An unmodified engine with gsync will get inconsistent frame rates, and thus janky animations, 100% of the time instead of just when they drop a frame (although the jankiness may be less – assuming they use the previous frame time to advance the next frame and it doesn’t vary too wildly between frames). This may be better overall if your game is consistent enough, but it would be even better to lock it to 60Hz and just tune down the graphics settings until you don’t drop any frames ever.

  2. I have only programmed small games, but ones where the CPU was the bottleneck (because of physics-related computations). At first what I did was to plan the next frame so as to compensate for the time spent computing the previous frame. This caused hiccups in animations whenever the garbage collector kicked in, so instead I started using a sampling of the average time to compute a frame as the basis for advancing the simulation time each frame. This is somewhat related to the problem you’re talking about.
    So, isn’t that the way it is usually done in real-time game programming? Why would anything work differently in the context of dynamic refresh rates? I guess the Nvidia controller will by itself pick a frequency that includes a margin of variability in the rendering latencies… don’t you think?

    • Usually games just make sure they don’t ever drop any frames (one reason that GCs are infrequently used, and when they are you try Very Hard to make sure your pauses can be accommodated within a frame, rather than causing drops). This way the frame rate is always consistent, and you can perfectly predict how much you need to animate each frame. As soon as your frame times are inconsistent you don’t know how much you need to animate things. E.g. an object that’s moving at 1 m/s should advance 1/60 m per frame if you have a fixed frame rate of 60Hz, but with variable refresh rates you just don’t know how long the current frame will take to complete when you’re updating your animations – maybe you should move 1/60 m, or 1/20 m, or 1/80 m; you just don’t know ahead of time how much to move things because the final frame time is unknown when you’re doing the update.
