
Alright, so for example: we have Max Pre-rendered Frames set to 1 in the Nvidia Control Panel and we use RTSS to limit the framerate, giving the minimum latency at that target framerate while we're hitting it; effectively we have 0 pre-rendered frames, as explained above. I'm left with just one remaining question then, and it's about RivaTuner vs. in-game frame limiters and the latency disparity there. How do in-game frame limiters push input latency down even further than this? Or, said another way: how can we get lower than 0 pre-rendered frames with an in-game limiter?

Thank you for your replies, that's very helpful. That's fine unless the GPU is fully maxed out, in which case the CPU doing stuff in advance becomes a bad thing: the work it did (like reading mouse input from the player) becomes old while it waits for the GPU. So that 1 pre-rendered frame will then actually have an impact on input lag. With a frame limiter this won't happen, since there's not even one pre-rendered frame prepared in advance: a frame will only be prepared when the GPU is actually ready to render it immediately afterwards. You would think that this would impact performance in a negative way. But remember that if the FPS cap is reached, you're there already; you don't need more FPS. And if the FPS cap is not reached, the frame limiter won't activate anyway, and you have normal asynchronous frame preparation as usual.
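To make that blocking behaviour concrete, here is a minimal sketch in Python (the loop structure and the render_one_frame callback are illustrative stand-ins, not any real limiter's code) of a CPU-side limiter that blocks *before* the frame is prepared, so input sampled after the wait is as fresh as possible:

```python
import time

TARGET_FPS = 70.0
FRAME_TIME = 1.0 / TARGET_FPS  # seconds per frame at the cap

def frame_limited_loop(render_one_frame, num_frames):
    """Run num_frames iterations, blocking *before* each frame so the
    CPU never prepares work ahead of the GPU."""
    next_deadline = time.perf_counter()
    for _ in range(num_frames):
        # Sleep until the next frame is due. While we block here, no
        # frame is being prepared in advance -- input read after this
        # point is as fresh as possible.
        now = time.perf_counter()
        if next_deadline > now:
            time.sleep(next_deadline - now)
        render_one_frame()           # sample input, simulate, submit to GPU
        next_deadline += FRAME_TIME  # fixed cadence, not "now + FRAME_TIME"
```

Note the deadline is advanced by a fixed step rather than reset from the current time, so small sleep overshoots don't accumulate into drift.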
One thing that's confusing me is the "Max Prerendered Frames" setting (Nvidia Control Panel). Intuitively I would think setting this to 1 (the minimum available) would have the same effect as limiting your framerate, or capping just under refresh (59.9 on a 60 Hz panel) for the low-lag Vsync described on Blur Busters, since it should force the CPU to stop preparing extra frames:

1) Framelimit just beneath refresh rate (59.92 on a 59.935 Hz panel) w/ RivaTuner
3) Use double buffered, not triple buffered, Vsync to reduce buffered frames/input lag

This seems to be the proper approach for lowering input latency, but I have no idea why it works. Going purely off what I've read so far about the issue being the CPU preparing extra frames, I would think that changing MPF to 1 would do all of the above by itself, but that simply doesn't seem to be the case (I've tested this myself and each step does seem to noticeably help). What does capping your framerate achieve that reducing the Max Prerendered Frames setting does not? If MPF is set to 1, what is frame limiting doing to further reduce latency? (How can it reduce more than the 1 that is already being forced, I mean?)

When not using a CPU-based frame limiter, that's true. Normally, because we're using at least two CPU cores these days, the CPU and GPU are asynchronous: the game can prepare a frame on one thread, the GPU driver can do stuff on another, and things happen in parallel. A CPU-based frame limiter will just block the game, and thus it won't be able to prepare a frame in advance.

Thank you very much, I really appreciate it, that's helpful. I still don't quite get how something like this is possible and I'd really like to know how it works. Is it because the game can measure input closer to the delivery of the frame, since it knows exactly when to expect it?
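As a rough illustration of that asynchrony, here is a toy Python model (names, timings, and structure are made up for illustration, not how a driver actually works) where the game thread can run at most one frame ahead of a simulated GPU thread, mirroring Max Prerendered Frames = 1; a blocking CPU-side limiter would instead keep this queue empty:

```python
import queue
import threading
import time

# Toy model of the pre-rendered frame queue: the game thread pushes
# prepared frames, the "GPU" thread pops and renders them. With
# maxsize=1 the game can run one frame ahead (like MPF = 1).
pre_render_queue = queue.Queue(maxsize=1)

def game_thread(frames):
    for i in range(frames):
        frame = f"frame {i} (input sampled at t={time.perf_counter():.3f})"
        pre_render_queue.put(frame)  # blocks only when the queue is full

def gpu_thread(frames, rendered):
    for _ in range(frames):
        frame = pre_render_queue.get()
        time.sleep(0.005)            # pretend rendering takes 5 ms
        rendered.append(frame)

rendered = []
g = threading.Thread(target=game_thread, args=(10,))
r = threading.Thread(target=gpu_thread, args=(10, rendered))
g.start(); r.start(); g.join(); r.join()
```

Because the queue holds one frame, input sampled for frame N sits waiting while frame N-1 renders; that queued wait is exactly the extra frame of lag a blocking limiter avoids.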
My question is in the title: why, or rather how, does capping your framerate reduce latency compared to the latency you'd have if you weren't limiting your framerate but were hitting the same frames-per-second value? I've seen it claimed that frame limiters can reduce input lag by 1 or 2 frames (1 if it's RivaTuner, 2 if it's an in-game frame limiter), such that if you hit 70 fps uncapped your input lag will be greater than if you hit 70 fps capped. This latency reduction also goes a long way towards explaining the low-lag traditional vsync implementations described on Blur Busters, which involve capping your framerate to keep buffers empty (as I understand it).
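As a back-of-the-envelope illustration of the "70 fps capped vs. uncapped" claim, here is the arithmetic under a simplified model (my assumption for illustration, not a measured result) where each frame waiting in the pre-render queue adds one frame time of input lag on top of the frame currently being rendered:

```python
fps = 70.0
frame_ms = 1000.0 / fps                          # ~14.3 ms per frame

uncapped_queued = 1                              # MPF=1, GPU-bound: queue stays full
lag_uncapped = (1 + uncapped_queued) * frame_ms  # ~28.6 ms

capped_queued = 0                                # limiter keeps the queue empty
lag_capped = (1 + capped_queued) * frame_ms      # ~14.3 ms

print(f"uncapped: {lag_uncapped:.1f} ms, capped: {lag_capped:.1f} ms")
```

Under this model the cap saves one full frame time at the same framerate, which matches the rough "1 frame with an external limiter" figure quoted above.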
