Hi,
I've been running JRiver for a while now, previously on an HTPC with a 1070 Ti graphics card. I use madVR as my renderer and I've had no issues, even with everything turned up to the max on my 1080p display. My source is a NAS connected over wired Ethernet.
I recently upgraded to a 2070 Super, and since then (might be coincidence, but they seem connected) I've been getting the odd stutter in 4K video. Watching the madVR stats while it happens, the stutter doesn't line up with the dropped/repeated frames you'd expect from the mismatched display timing (my display reports 23.977 Hz, not a perfect match to the content), and the counts of dropped frames, repeated frames, and glitches don't increase when it occurs. My render times are down in the <20 ms range.
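For scale, here's a rough back-of-envelope check of how often that small clock mismatch alone should force a repeated frame. This assumes the content is standard 23.976 fps film-rate material, which I haven't confirmed:

```python
# Rough estimate of how often a frame repeat is forced by a clock
# mismatch. 23.976 fps content is an assumption; 23.977 Hz is the
# display rate madVR's stats overlay reports.
content_fps = 24000 / 1001      # ~23.976 fps (assumed film-rate content)
display_hz = 23.977             # reported by madVR

drift_fps = abs(display_hz - content_fps)   # frames of drift per second
seconds_per_repeat = 1 / drift_fps          # one repeat per full frame of drift
print(f"~1 repeated frame every {seconds_per_repeat:.0f} s "
      f"({seconds_per_repeat / 60:.1f} min)")
```

That works out to roughly one repeated frame every ~17 minutes, so the timing mismatch can't account for a recurring stutter, which fits what the stats counters are showing.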
I did some debugging and it doesn't happen in MPC-BE with the same madVR settings.
I've been able to pin it down to the "Optimise Hardware For Decoding Performance" setting in Video Options: turning that option on removes the issue, and turning it off re-introduces it, consistently.
I understand that this option switches between DXVA copy-back and native decoding. Is that correct? If so, can anyone explain why a more powerful card might struggle with one method and need switching to the other?
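For context, my understanding is that in the native path the decoded frames stay in GPU memory and madVR renders them directly, while with copy-back each decoded frame is copied over PCIe into system RAM and then uploaded again for rendering. Here's a rough sketch of what that traffic costs at UHD, assuming 10-bit HEVC decoded to P010 (3 bytes per pixel); I haven't confirmed the source format, and 8-bit NV12 would be half this:

```python
# Back-of-envelope PCIe traffic for DXVA copy-back at UHD. Assumes
# 10-bit 4:2:0 decoded to P010: 2 bytes per sample x 1.5 samples
# per pixel = 3 bytes per pixel (assumption, not confirmed).
width, height = 3840, 2160
bytes_per_pixel = 3                  # P010 (assumption)
fps = 24000 / 1001                   # ~23.976 fps (assumption)

frame_bytes = width * height * bytes_per_pixel
one_way = frame_bytes * fps / 1e6    # MB/s, GPU -> system RAM
print(f"~{one_way:.0f} MB/s down, ~{2 * one_way:.0f} MB/s total "
      "with the re-upload for rendering")
```

Even the ~1.2 GB/s round trip is a small fraction of PCIe 3.0 x16 bandwidth, so if copy-back is involved I'd guess the issue is the timing/scheduling of the copies rather than raw throughput, which might also be why it behaves differently across GPU generations and drivers.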
I'm going to check again in MPC-BE, see if I can recreate the issue using one of those methods, and see whether switching to the other resolves it. I can also swap the GPUs back, but I'd rather not.
I'll also do some testing to see whether I can spot any difference with the option enabled versus disabled. If I can't, then I guess it doesn't matter, but I'm curious as to why this is happening.
Thanks for any help anyone can offer.