INTERACT FORUM

Author Topic: NEW: Present Queue for JRVR Video Renderer  (Read 8921 times)

DocCharky

  • Junior Woodchuck
  • Posts: 54
Re: NEW: Present Queue for JRVR Video Renderer
« Reply #50 on: September 14, 2024, 11:19:47 am »

I've not experienced this myself at all, but if it only happens with mismatched content, maybe something is up, although I wouldn't know what right now.
Logs may help.
I use frame rate matching, so I'm not sure the issue is related to mismatched content.
Charky

"Rule #1 : If it works, don't change anything."

mykillk

  • Regular Member
  • World Citizen
  • Posts: 238
Re: NEW: Present Queue for JRVR Video Renderer
« Reply #51 on: September 22, 2024, 12:17:12 am »

I'm also getting regular frame drops using the new present queue mode. The bigger the queue, the more drops I get. I'm running NVIDIA driver 560.94 on an RTX 4080, Windows 11 23H2. The monitor is running at 119.88 Hz.

The attached capture from the MSI Afterburner graph window may be helpful. It shows a comparison of using Presentation Queue On (15 frames) vs. Presentation Queue Off.

Some takeaways that I noticed:
  • Core clock is running about 3x higher with Queue On and is a bit spiky. With Queue Off it's flatlined at the lowest possible clock rate of 210 MHz.
  • GPU usage is lower with Queue Off.
  • Frametime is erratic with Queue On. The frametime with Queue Off is consistent with the source frame rate of 23.976 fps (see the quick arithmetic after this list).
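
For context on that last point, here is the quick arithmetic (just a sketch using the numbers from this post, nothing measured from logs):

    source_fps = 23.976
    refresh_hz = 119.88
    frame_budget_ms = 1000 / source_fps            # ~41.71 ms per source frame
    refreshes_per_frame = refresh_hz / source_fps  # exactly 5.0
    print(frame_budget_ms, refreshes_per_frame)

So each source frame has a ~41.7 ms budget and is shown for exactly 5 refreshes, which is why the Queue Off frametime tracks the source rate so cleanly.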


Something else I noticed is that there seems to be a high correlation between dips in the Decode Queue (i.e. it falling to 5-6) and frame drops. My experience with madVR was that having a slightly larger CPU decode buffer than the GPU render buffer helped cut out frame drops. With the Decode and Render buffer sizes the same, a sequential bottleneck is introduced (i.e. trying to both decode and then render a frame within the same frametime budget). With the CPU decode buffer even one frame larger, that bottleneck goes away because the CPU and GPU are essentially processing different frames in parallel. It also gives the pipeline one extra frametime of headroom to "catch up" when the Decode buffer falls behind.
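
To make the parallelism point concrete, here is a minimal toy sketch (not JRVR's or madVR's actual code; decode_frame/render_frame and the timings are made up): a decode thread feeds a presenter thread through a bounded queue sized one frame deeper than the render-side depth, so the CPU can work on the next frame while the GPU presents the current one.

    import queue
    import threading
    import time

    RENDER_QUEUE_DEPTH = 15                        # "present queue" depth from the example above
    DECODE_QUEUE_DEPTH = RENDER_QUEUE_DEPTH + 1    # one extra frame of decode headroom

    decoded = queue.Queue(maxsize=DECODE_QUEUE_DEPTH)

    def decode_frame(n):
        time.sleep(0.004)      # pretend CPU decode takes ~4 ms
        return f"frame {n}"

    def render_frame(frame):
        time.sleep(0.006)      # pretend GPU render/present takes ~6 ms

    def decoder(total_frames):
        for n in range(total_frames):
            decoded.put(decode_frame(n))   # blocks only when the queue is full
        decoded.put(None)                  # sentinel: no more frames

    def presenter():
        while True:
            frame = decoded.get()
            if frame is None:
                break
            render_frame(frame)

    threading.Thread(target=decoder, args=(100,), daemon=True).start()
    presenter()

With equal buffer sizes, decoding and rendering a given frame have to fit back to back inside one frametime budget; with the extra decode slot they overlap, which is the behaviour that madVR buffer tweak was exploiting.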

I've also attached a screenshot of some madVR settings related to GPU flushing that might be helpful.