@mykillk: My sincere apologies for this very late reply; I've been busy, but procrastinating as well, unfortunately.
Hmm, my monitor is a 15.6" (39.6 cm) HD (1366x768) WLED display with TrueLife™, so playing a 1080p file on my XPS L502x means it has to be downscaled.
Now let me explain a bit about how I watch videos on my monitor. The only type of video that uses the whole screen is 1080p (downscaled, obviously).
The rest (1280x720, 1280x544, 720x576 [DVD PAL], 720x480 [DVD NTSC]) are played at their NATIVE resolution. IMO, there's no point in adding extra pixels to those videos (i.e. upscaling them to 1366x768), as I'm looking for the highest quality possible. So with these types of videos, the only thing I'm worried about is the chroma upscaling.
Hmm, you seem to underestimate the full potential of the 1GB NVIDIA® GeForce® GT 525M graphics with Optimus (with the latest drivers).
With the videos that I play at their NATIVE resolution, I am able to use Jinc 3 with AR for the chroma upscaling, with NO frames being dropped or delayed.
Playback seems to be smooth. BTW, this is with the CPU queue level at 32 and the GPU queue at 4.
You stated: "Essentially, they should stay mostly full. If they aren't, and you're getting dropped frames, increasing the CPU/GPU queues will only slightly delay the onset"
- I think you made a mistake there: when one is getting DROPPED/DELAYED frames, it is best to DECREASE the GPU queue. This is one of the reasons I am only using 4 for the GPU queue; any higher and Jinc 3 for chroma upscaling would start dropping frames.
Looking at the stats after pressing Ctrl+J, what rendering times indicate that the GPU is handling a video fine? I read somewhere that one of the figures should be under 5 ms, and that's how you would know everything is smooth.
BTW, my display rate is 59.974 Hz, and I notice that the composition rate of most of my video files is 60 Hz. Is it normal for the composition rate to be higher than the display rate?
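If I understand the mismatch right, a 60 Hz source on a 59.974 Hz display accumulates one extra frame every 1/(60 - 59.974) seconds, so roughly one frame would have to be dropped at that interval. A quick back-of-the-envelope sketch, using just the two rates above (my own arithmetic, not anything madVR reports directly):

```python
display_hz = 59.974      # my display's measured refresh rate
composition_hz = 60.0    # composition rate shown for most of my files

# One surplus source frame builds up per 1/mismatch seconds,
# so about one frame gets dropped at that interval.
mismatch_hz = composition_hz - display_hz
seconds_per_dropped_frame = 1.0 / mismatch_hz
print(f"~1 dropped frame every {seconds_per_dropped_frame:.1f} s")
```

So if that reasoning holds, a 0.026 Hz mismatch would only cost about one dropped frame every 38 seconds or so, which may be why playback still looks smooth.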
With 1080p files, however, image downscaling comes into play, and I use Lanczos 3 with AR for chroma upscaling, and Lanczos 3 with AR for image downscaling. I haven't tried "scale in linear light" yet... BTW, what is this? What purpose does it serve?
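From what I've read elsewhere (so take this as my guess, not an authoritative answer), "scale in linear light" means the scaler first decodes the gamma-encoded pixel values to linear light, interpolates there, and then re-encodes. A toy sketch of why that changes the result, using a pure 2.2 power curve as a stand-in for the real BT.709/sRGB transfer functions:

```python
def to_linear(v, gamma=2.2):
    # Decode a gamma-encoded value to linear light
    # (pure-power approximation of the real transfer curve).
    return v ** gamma

def to_gamma(v, gamma=2.2):
    # Re-encode a linear-light value back to gamma space.
    return v ** (1.0 / gamma)

# Averaging two neighbouring pixels is a crude stand-in for the
# interpolation a scaler does between samples.
a, b = 0.2, 0.8
gamma_avg = (a + b) / 2.0                                   # blend in gamma space
linear_avg = to_gamma((to_linear(a) + to_linear(b)) / 2.0)  # blend in linear light
print(gamma_avg, linear_avg)
```

The two blends land on noticeably different values, with the linear-light one coming out brighter, which would explain why the option affects how scaled edges look.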
You mentioned: "get the best performance by continuing to allow the CPU to do the video decoding, which is more than fast enough."
Please tell me, what do I do to make sure that ALL of the decoding is done by the CPU and that none of it is being done by the GPU?
What settings do I need to change in JRiver or madVR?
thanks a lot, mykillk!