
NEW: VST Latency Compensation


eve:

--- Quote from: mumford on October 05, 2023, 06:28:26 pm ---Thanks for the link.  This may actually work: a server treating video and audio as separate AES67 streams, with the final client comparing timestamps and syncing, or doing whatever is required.

--- End quote ---

You wouldn't even need to actually 'send' the video. Again, in the setup I've described, the playback application essentially syncs itself to the PTP clock on your network. The audio gets decoded from the video, and instead of sending that audio to a 'device', you push packets containing it directly onto the network.
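As a bare sketch of what 'pushing packets onto the network' means here: AES67 rides on RTP over UDP multicast, so at its crudest you prepend an RTP header to raw PCM and multicast it. The group address, port, payload type, and SSRC below are illustrative assumptions; a real AES67 sender also needs PTP-derived RTP timestamps, an SDP announcement, and correct packet pacing (typically 1 ms of 48 kHz 24-bit PCM per packet).

```python
import socket
import struct

MCAST_GROUP = "239.69.0.1"   # assumed multicast address
MCAST_PORT = 5004            # conventional RTP media port

def build_rtp_packet(seq: int, timestamp: int, pcm_payload: bytes) -> bytes:
    """Prepend a minimal 12-byte RTP v2 header to raw PCM."""
    header = struct.pack(
        "!BBHII",
        0x80,                  # version 2, no padding/extension/CSRC
        96,                    # dynamic payload type (assumed)
        seq & 0xFFFF,          # sequence number, wraps at 16 bits
        timestamp & 0xFFFFFFFF,
        0x12345678,            # SSRC, arbitrary for this sketch
    )
    return header + pcm_payload

def send_audio(sock: socket.socket, seq: int, timestamp: int, pcm: bytes) -> None:
    """Multicast one audio packet; the receiver syncs it via PTP time."""
    sock.sendto(build_rtp_packet(seq, timestamp, pcm), (MCAST_GROUP, MCAST_PORT))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
```

The point is just that the 'device' boundary disappears: anything on the network subscribed to that multicast group becomes the output.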

Now, if you wanted to get really complex, you could have your DSP node send a message back to the initial playback application saying 'hey, my DSP pipeline takes this long to complete', and offset the video by that amount in the playback application. I'm just spitballing here, but essentially: sync the playback pipeline to PTP; playback 'starts', but instead of real audio you send empty samples (same channel count and format) to your DSP for, say, half a second; the DSP node tells the playback application 'this took me 3ms from receiving the packet to completing my DSP pipeline'; the playback application accounts for that delay, and then the actual video + audio stream starts. You could hypothetically cascade this too: if your DSP node sends OUT an AES67 stream, whatever picks that up could also tell the playback application 'look, it takes me 5ms from getting a packet to that sample hitting the D/A', and add that into the offset.
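The handshake I'm describing could be sketched like this. The message format, field names, and summing logic are all my own assumptions for illustration, not anything JRiver or AES67 actually defines:

```python
import json

# Hypothetical latency-report handshake: during the warm-up period of
# silent samples, each node in the chain reports how long its stage
# takes, and the playback application delays the video by the total.

def compute_video_offset_ms(reports: list) -> float:
    """Sum the per-stage latency reported by every node in the chain."""
    return sum(r["latency_ms"] for r in reports)

# e.g. the DSP node reports 3 ms, and a downstream D/A endpoint
# cascades 5 ms more back up the chain:
reports = [
    json.loads('{"node": "dsp", "latency_ms": 3.0}'),
    json.loads('{"node": "dac", "latency_ms": 5.0}'),
]
video_offset = compute_video_offset_ms(reports)  # delay video by 8 ms
```

The cascading case falls out for free: every node just appends its own report, and the playback application keeps summing.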
Really though, at +/- 1ms or 2ms of video sync error, it's imperceptible for most purposes.



