So I spent most of the afternoon working on this one and found an (admittedly bizarre) solution, so I thought I'd post it in case it rings any bells for the JRiver devs or for anyone else experiencing a similar issue.
My MC server has all three DLNA functions ticked (server, renderer, and controller), but I also had DLNA controller functionality enabled on several client PCs so that I could control other MC clients from a central location (tremote only works from client to server, not client to client).
The solution to my problem was (strangely) to turn off the DLNA controller option in JRiver on every box other than the server. After spending some time watching the media network log during playback, I noticed that the black screen/connection loss seemed to coincide with a log event from one of my JRiver client PCs that had the DLNA controller enabled (usually an M-SEARCH). Not every one of those events caused a dropout, but every black screen seemed to line up with one of them. So, on a hunch, I disabled the DLNA controller on the non-server boxes. The problem immediately vanished, and I managed to play two movies back to back all the way through.
So it looks like having more than one MC instance running as a DLNA controller was confusing or interrupting the renderer somehow? I know next to nothing about UPnP or DLNA, so I'm sure that's not the technical way to put it, but disabling all but one JRiver DLNA controller makes the problem vanish, and re-enabling them makes it return.
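For anyone who wants to see the same thing on their own network without digging through the MC log, here's a minimal sketch (plain Python, nothing to do with JRiver's code) that just listens for SSDP M-SEARCH multicasts and prints which host sent each one. The multicast address and port (239.255.255.250:1900) are the standard SSDP values; everything else is an assumption on my part about how you'd want to watch the traffic. Run it while a movie is playing and see whether the dropouts line up with M-SEARCHes from a particular box.

import socket
import struct

SSDP_ADDR = "239.255.255.250"   # standard SSDP multicast group
SSDP_PORT = 1900                # standard SSDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SSDP_PORT))

# Join the SSDP multicast group on all interfaces so we see everyone's probes.
mreq = struct.pack("4sl", socket.inet_aton(SSDP_ADDR), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, (src_ip, src_port) = sock.recvfrom(4096)
    text = data.decode("utf-8", errors="replace")
    if text.startswith("M-SEARCH"):
        # Each of these is one discovery probe; note the source IP and
        # whether it coincides with a playback dropout on the renderer.
        print(f"M-SEARCH from {src_ip}:{src_port}")

In my case it would presumably show the M-SEARCHes coming from the client PCs that still had the DLNA controller ticked, which is what the media network log was already hinting at.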
Interestingly, some other UPnP/DLNA devices in my house are now working better too. I had been experiencing similar occasional playback interruptions on a Raspberry Pi-based UPnP audio renderer (using MPD and upmpdcli), and those now seem to be gone as well. I had just chalked them up to wifi flakiness (and/or RPi flakiness), but I've since managed to play several albums back to back on the Pi without an interruption, which is unusual.
It seems that, at least with my silly homemade SoC renderers, two JRiver instances acting as DLNA controllers on the same network was one too many. If the devs are interested in running this down, I'm pretty sure I can reproduce the issue, but I'm just happy to have found a fix.