CPU usage is no higher than 30%.
Not sure how much headroom is left in the GPU. Does anyone know of a tool to measure the utilisation?
GPU-Z can give you a VERY ROUGH GPU usage percentage meter. However, because measuring GPU utilization is not simple (at all), you should treat that number as nothing more than a very rough guide. Different GPUs may respond differently to the way GPU-Z measures usage, and the number it puts out can't be directly compared to a CPU utilization figure. Also, there is really no documentation from the GPU-Z people on exactly what this number is measuring or how it is calculated. It is a mystery black box.
I've explained this in other threads if you want to take a look around.
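If you want a number that comes straight from the driver instead of GPU-Z's black box, NVIDIA exposes a utilization counter through NVML that you can poll yourself while playback is running. Here's a minimal sketch, assuming an NVIDIA card and the pynvml Python bindings; it's just an independent way to watch the GPU, nothing to do with how MC works internally:

```python
import time
import pynvml

# Talk to the NVIDIA driver via NVML (the same data nvidia-smi reports).
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml versions return bytes
    name = name.decode()
print(f"Polling utilization on: {name}")

# Sample the driver's utilization counters once a second for ten seconds.
for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f"GPU core: {util.gpu:3d}%   memory controller: {util.memory:3d}%")
    time.sleep(1)

pynvml.nvmlShutdown()
```

Keep in mind NVML's counter has its own quirks (it's a coarse "busy" percentage), so it's another rough guide rather than a definitive answer.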
My understanding is that JRiver renders the file via the ROHQ codecs and, with hardware acceleration enabled, sends this to the GPU for processing rather than the CPU. Is that correct?
MC uses a set of third-party filters for ROHQ (mostly madVR and LAV, but there are other things too). When you enable GPU Acceleration, it could actually change the set of filters that are used behind the scenes, though in practice, on Nvidia hardware, this basically means it uses the modern-day version of LAV CUVID (which has now been "rolled into" the normal LAV filters... CUVID is dead, long live CUVID).
It does not use any of the Nvidia decoders. You do not need to install any additional third-party decoders to make use of Red October for normal video playback in MC. Even if you do, MC won't use them in the standard Red October modes. It is totally isolated from the normal filters installed on the system.
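For what it's worth, LAV's decoders are built on the same ffmpeg/libavcodec code and use the same NVIDIA decode engine that ffmpeg exposes, so if you ever want to sanity-check from outside MC that your card can hardware-decode a particular file, a rough test like the sketch below will do it. This assumes a reasonably recent ffmpeg build with NVDEC/CUDA support on your PATH, and the file name is just a placeholder:

```python
import subprocess

clip = "sample_1080p.mkv"  # placeholder; point this at one of your own files

# Decode on the GPU and throw the frames away. If this finishes at well
# above realtime speed, the hardware decode path works on this machine.
result = subprocess.run(
    ["ffmpeg", "-hwaccel", "cuda", "-i", clip, "-f", "null", "-"],
    capture_output=True,
    text=True,
)

# ffmpeg writes its progress/summary to stderr; the last lines show the speed.
for line in result.stderr.splitlines()[-3:]:
    print(line)
```

Again, this is a standalone test of the decoder on the card, not a peek inside MC's filter graph.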
I'm really not sure about the Nvidia control panel color settings. I think the default is to "use application settings". It would make sense that if you are overriding this behavior, you won't be able to use the color controls inside MC. But, I'm really not sure. Someone with more experience with an Nvidia card would have to comment.
This also highlights that the model number is not an indicator of performance, e.g. a 9800 GT is more powerful than a GT 430.
Absolutely. And you can't even go by "generation" and "product segment". So, the 9800 GT was supposedly a GeForce 9x00 series card by the model number, right? Actually, no... The chip inside is identical to the one in the 8800 GT. They just changed the name. They do this a LOT, particularly at the low end.
And, a month or so earlier, they'd released the high-end GTX 280, which was an order of magnitude larger and more powerful than the older 9x00 and 8x00 series cards, but some of the other cards in that new series were actually less powerful at launch than the older cards on the older architecture.
Nothing at all, when it comes to GPUs, is simple. There is a LOT of marketing mumbo-jumbo.
Even if you have two 9800 GT cards, they can have wildly different clock speeds and memory configurations, which can dramatically impact performance. That's why I said above that just because one person had trouble with a GeForce 430 card doesn't mean that all 430 cards will have trouble... Just THAT one with THAT CPU and THOSE drivers.
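If you want to see exactly what clocks and memory a particular card shipped with, the driver will tell you directly. Another small sketch, again assuming an NVIDIA card and the pynvml bindings (an AMD card would need a different API):

```python
import pynvml

pynvml.nvmlInit()

# List every NVIDIA GPU in the system with its rated clocks and VRAM,
# which is where two cards sharing a model name can differ wildly.
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)

    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()

    core_mhz = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    mem_mhz = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_MEM)
    vram_mib = pynvml.nvmlDeviceGetMemoryInfo(handle).total // (1024 ** 2)

    print(f"{name}: core {core_mhz} MHz, memory {mem_mhz} MHz, {vram_mib} MiB VRAM")

pynvml.nvmlShutdown()
```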
That's one of the main reasons that I prefer AMD hardware (in general terms) at the low end. While they still play games with model numbers and rebrand older GPUs, they don't do it as often or as egregiously as Nvidia does at the low end. For anything under $150 (or so) from Nvidia you really, really have to spend time and research the particulars of the card and the GPU on it before you buy, if you want to know what you're getting. That can be true for AMD cards as well, but you usually don't see the problems until you get down to the very-low-tier $50-$60 cards. And, frankly, at those price points you are really just getting something that can display pixels on a monitor. Everything else is gravy, and you can't expect much.
At the $250 midrange price points and above, the situation is much cleaner on both sides, and the model numbers tend to make much more sense.
That said, if a particular product doesn't offer the acceleration support you need with the filters provided... Well then, it isn't a good choice at all. And that's where we are right now with LAV and MC and AMD. Nev just doesn't like AMD's hardware or drivers (he's used to Nvidia), and so development there will likely require waiting for a real unified GPGPU solution.
However... Along those lines, this article from over the weekend gave me a lot of hope.