Thanks for the heads-up on the ASIO buffer setting. So basically I should ignore the output buffer in JRiver and set the buffer in my device's control panel?
Sort of. If you open the device settings for your ASIO device in JRiver, you'll see a button for "ASIO control panel." That's the relevant buffer setting, but with every ASIO device I've used it just opens a small version of the device's own control panel. Double-check to make sure, though, as that's the setting JRiver pays attention to for the output buffer.
The buffer for the Behringer is only about 5ms, which works just fine, but presumably as my DSP gets fancier I'll need to add some playback buffer in MC20. I just want to get into my head the order in which this gets processed. Does the playback buffer take precedence, with my device buffer applied after that? Or would I need to set my device buffer to the sum of the playback buffer set in MC20 plus its own minimum latency? Given that I'd only be using MC as the WDM driver and not actually using the rest of the software (save for the DSP), I assume I can just calculate the length of the playback buffer as a function of the number of taps/sample position, plus a little more to be on the safe side?
If you're using the WDM driver there's an additional source of latency which is JRiver's input buffer. You can set that under Audio-->Options-->Advanced-->live playback latency. That setting plus the output buffer for your ASIO device should be close to your minimum total latency when using the WDM driver. But two caveats:
1) Just because the Behringer is reporting 5ms as the output latency doesn't mean that's actually true (unless that 5ms is an actual measurement you took). Almost every piece of audio equipment I've ever had that self-reported latency has "lied" about the latency, sometimes by quite a bit. It can be as much as double that.
2) If you use latency-producing processing (e.g. convolution) it will add additional delay. JRiver doesn't try to "fit" the DSP into the buffer provided; it just takes as much time as the DSP needs. In the case of a convolution filter the general formula for latency is (taps / 2) / sampling rate, so an 8800-tap filter at 44.1kHz would add about 100ms of delay.
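To make that formula concrete, here's a quick sketch (the function name is mine, not anything in JRiver):

```python
# Latency added by a linear-phase convolution filter, per the formula
# above: delay = (taps / 2) / sampling rate.
def convolution_delay_ms(taps: int, sample_rate_hz: int) -> float:
    """Approximate delay a convolution filter adds, in milliseconds."""
    return (taps / 2) / sample_rate_hz * 1000.0

# The 8800-tap filter at 44.1kHz from the example above:
print(round(convolution_delay_ms(8800, 44100)))  # about 100 (ms)
```

Note the delay scales with tap count but shrinks as sample rate rises, so the same filter run at 88.2kHz would only add about 50ms.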
So your real-world latency using convolution and the WDM driver will be approximately: JRiver live playback latency + convolution delay + actual output latency. If that total gets much higher than 25 or 30ms you'll likely begin to notice lipsync issues. IME if it gets much above 50ms lipsync is completely blown. The film standard for lipsync is less than 22ms, the TV standard is less than 45ms.
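The sum above can be sketched the same way. The helper names and the example numbers (10ms input buffer, 100ms convolution, 5ms output) are hypothetical illustrations, not measurements:

```python
# Sum the three latency sources described above for the WDM driver path.
def total_wdm_latency_ms(live_playback_ms: float,
                         convolution_ms: float,
                         output_ms: float) -> float:
    """Approximate real-world latency through the WDM driver, in ms."""
    return live_playback_ms + convolution_ms + output_ms

def lipsync_verdict(total_ms: float) -> str:
    """Compare total latency against the film (<22ms) and TV (<45ms) standards."""
    if total_ms < 22:
        return "within the film standard"
    if total_ms < 45:
        return "within the TV standard"
    return "likely noticeable lipsync error"

# Hypothetical numbers: 10ms input buffer + 100ms convolution + 5ms output.
total = total_wdm_latency_ms(10, 100, 5)
print(total, "->", lipsync_verdict(total))
```

With a big convolution filter in the chain, the convolution term dominates and blows past both standards on its own.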
All that's irrelevant for audio-only listening, but for web video through the WDM driver it more or less rules out conventional convolution filters with many thousands of taps.
Oh, one more thing, which maybe I should start a different thread about: is there any way in MC20 to change the names of the output channels? I've got my output set to 10 channels, and I was expecting those channels to show up with the names they're given in the ASIO control panel for the Behringer rather than left, right, centre, sub, etc. Maybe it's a future feature request; I guess there must be a request thread around here somewhere.
It's been discussed before, but there's currently no way to rename them. You can change the order using the "order channels" filter, which can be helpful. The default order of the channels is different in 5.1 and 7.1 because that's the standard. In 5.1 the channels are 0=L, 1=R, 2=C, 3=S, 4=SL, 5=SR. In 7.1 the channels are 0=L, 1=R, 2=C, 3=S, 4=RL, 5=RR, 6=SL, 7=SR.
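In case it helps when setting up the "order channels" filter, the two default orders quoted above can be written out as a quick mapping (a sketch; the list names are mine, not JRiver's):

```python
# Default channel orders (output index -> channel label), where S = sub.
SURROUND_5_1 = ["L", "R", "C", "S", "SL", "SR"]
SURROUND_7_1 = ["L", "R", "C", "S", "RL", "RR", "SL", "SR"]

def channel_label(layout: list, index: int) -> str:
    """Return the channel label at a given output index."""
    return layout[index]

print(channel_label(SURROUND_7_1, 4))  # RL
```

The practical gotcha is that indices 4 and 5 mean side surrounds in 5.1 but rear surrounds in 7.1, which is exactly where reordering can trip you up.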
Thanks again everyone, I'm really enjoying using this flexible software. Maybe bits of it are a little counterintuitive interface-wise, which I guess is always a function of a piece of software growing beyond its original intention. That said, the amount of stuff it can do, and Matt's commitment to bringing all those tools to us users, is to be very highly commended.
You'll be amazed what you can do with JRiver if you stick with it.