INTERACT FORUM


Author Topic: NEW: VST Latency Compensation  (Read 4774 times)

JimH

  • Administrator
  • Citizen of the Universe
  • Posts: 71348
  • Where did I put my teeth?
NEW: VST Latency Compensation
« on: June 07, 2023, 03:04:56 pm »

Thanks to user mitchco for suggesting and helping Matt implement it.  Read the email thread below from the bottom up.  It's in MC 31.0.18.

Mitch's site is:  https://accuratesound.ca/

-------------

Way to go Matt! Works like a charm! Thank you!

Initial testing on two computers is all good. Give me a day to try various latencies and I will report back.

Thanks again!

Mitch

From: Mitch Barnett
Sent: Monday, June 5, 2023 8:46 AM
To: 'Matt Ashland' <>; Jim Hillegass <>
Subject: RE: Possible feature request?

Hi Matt,

Awesome!! Thanks so much!

Yes, I am happy to test. Will keep an eye out.

Mitch

From: Matt Ashland
Sent: Monday, June 5, 2023 6:29 AM
To: Mitch Barnett <accuratesound.ca>; Jim Hillegass <>
Subject: Re: Possible feature request?

Hi again Mitch,

Instead of sending me a plugin, I'll just put a build out later today with support.  Hopefully you can test and provide feedback sometime.

Thanks,

-Matt

On 6/4/2023 6:21 PM, Matt Ashland wrote:

Hi Mitch,

Good idea!

I'll hook this up in the coming days.

Do you have a plugin you could provide that returns a non-zero latency?  It would allow me to test a little bit.

Thanks,

-Matt

On 6/2/2023 11:56 PM, Mitch Barnett wrote:

Hi Matt and Jim,
Just wanted to thank you again for all the help getting my plugin to work perfectly with JRiver. I have very happy customers, including myself, as JRiver is my “daily driver.”

While my plugin is a 0 ms convolver, I also report the FIR filter latency to hosts. In DAWs on the pro audio side, the host provides latency compensation based on the plugin’s reported latency. One example is audio for video post-production: if the plugin reports 1000 samples of latency, then the host delays the video by 1000 samples so everything syncs up.

There is a VST3 interface for both the plugin side and the host side. On the plugin side, I call setLatencySamples() and pass in the latency of the FIR filter in samples: typically 32,768 samples for a 65,536-tap linear-phase FIR filter, since a linear-phase filter’s group delay is half its length.

On the host side, there is an interface called getLatencySamples(). The interface is documented here:

https://steinbergmedia.github.io/vst3_dev_portal/pages/FAQ/Processing.html#q-how-report-to-the-host-that-the-plug-in-latency-has-changed

Specifically: https://steinbergmedia.github.io/vst3_doc/vstinterfaces/classSteinberg_1_1Vst_1_1IAudioProcessor.html#af8884671ccefe68e0a86e72413a0fcf8

The idea is that JRiver would call getLatencySamples() on the plugin and delay the video by the number of samples reported. I believe JRiver’s internal convolution engine already does this, or something similar.
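To make the handshake concrete, here is a minimal self-contained C++ sketch. Plain C++ stands in for the real interfaces (JUCE’s AudioProcessor::setLatencySamples() on the plugin side, Steinberg::Vst::IAudioProcessor::getLatencySamples() on the host side); the struct name and figures are illustrative only.

Code: [Select]
#include <cstdint>
#include <iostream>

// Stand-in for the plugin: a linear-phase FIR convolver reporting its
// latency the way a VST3 plugin does through getLatencySamples().
struct FakeConvolverPlugin {
    uint32_t firTaps = 65536;             // linear-phase FIR length
    uint32_t getLatencySamples() const {
        return firTaps / 2;               // group delay = half the filter length
    }
};

int main() {
    FakeConvolverPlugin plugin;
    const double sampleRate = 48000.0;

    // Host side: query the reported latency and delay the video to match.
    const uint32_t latencySamples = plugin.getLatencySamples();
    const double videoDelayMs = 1000.0 * latencySamples / sampleRate;

    std::cout << "Plugin reports " << latencySamples << " samples; "
              << "host delays video by " << videoDelayMs << " ms\n";
    // 32,768 samples at 48 kHz is about 683 ms of video delay.
}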

Would it be possible to implement this in JRiver MC 31?

I hope you guys have a great weekend!

Kind regards,

Mitch


stewart_pk

  • Citizen of the Universe
  • Posts: 648
Re: NEW: VST Latency Compensation
« Reply #1 on: June 07, 2023, 06:47:11 pm »

Nice.

saltanar

  • Recent member
  • Posts: 22
Re: NEW: VST Latency Compensation
« Reply #2 on: June 09, 2023, 01:46:33 am »

Is this also working for the Dirac plugin?

mattkhan

  • MC Beta Team
  • Citizen of the Universe
  • Posts: 3959
Re: NEW: VST Latency Compensation
« Reply #3 on: June 09, 2023, 02:11:59 am »

Quote from: saltanar
Is this also working for the Dirac plugin?
Given the description of the feature, that would be a question to ask Dirac (the plugin has to report its latency).

dziemian

  • Recent member
  • Posts: 33
Re: NEW: VST Latency Compensation
« Reply #4 on: June 09, 2023, 07:42:04 am »

Hopefully it will work with the Dirac plugin as well.

Flak

  • Recent member
  • Posts: 12
Re: NEW: VST Latency Compensation
« Reply #5 on: June 09, 2023, 11:36:20 am »

Interesting!!

Is it possible to manually enter the desired VST latency compensation in JRiver?

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #6 on: June 10, 2023, 12:43:28 pm »

Holy smokes.

Thank you!!!! This is actually a huge deal

Flak

  • Recent member
  • Posts: 12
Re: NEW: VST Latency Compensation
« Reply #7 on: June 14, 2023, 09:24:46 am »

I've just got confirmation from our developer of the Dirac plugin that the Dirac Live processor already informs the host of the latency  :)

mumford

  • Recent member
  • Posts: 5
Re: NEW: VST Latency Compensation
« Reply #8 on: August 26, 2023, 03:00:09 pm »

Quote from: Flak
Interesting!!

Is it possible to manually enter the desired VST latency compensation in JRiver?

I'd like to expand this question with a use case scenario.  JRiver plays a movie and sends HDMI audio out to an external Dolby Atmos decoder, such as the Arvus H1-D.  Multiple channels of decoded audio are then routed back to the same PC (with JRiver and Hang Loose Convolver installed) via an AES67 ASIO driver.  These channels can be room-corrected using Dirac Live and maybe run through active crossovers into even more channels.

How do I tell JRiver to add the external decoder's latency to the audio chain, so the video stays in sync with the audio?  IEEE 1588 allows timestamps.  Can VST latency compensation read them?

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #9 on: September 20, 2023, 01:09:11 pm »

Quote from: mumford
I'd like to expand this question with a use case scenario.  JRiver plays a movie and sends HDMI audio out to an external Dolby Atmos decoder, such as the Arvus H1-D.  Multiple channels of decoded audio are then routed back to the same PC (with JRiver and Hang Loose Convolver installed) via an AES67 ASIO driver.  These channels can be room-corrected using Dirac Live and maybe run through active crossovers into even more channels.

How do I tell JRiver to add the external decoder's latency to the audio chain, so the video stays in sync with the audio?  IEEE 1588 allows timestamps.  Can VST latency compensation read them?

You're sort of close here (I use AES67 extensively). I'm sort of confused by your logic, though: how would you route your AES67 stream from the Arvus (which you're essentially dumping into an ASIO device) back into JRiver, and why?

The way I'd set this up is JRiver > HDMI > Arvus > AES67 Stream > PC (how you receive the AES67 stream may matter, there's a number of VSC options) > DAW/VST Host and then from there, figure out how much delay you're going to need to add to JRiver (which is playing the video and outputting audio over HDMI).
You need to host Dirac in this case if your goal is Atmos since standalone is capped at 8ch IIRC?

It's definitely a niche use case (and it's frustrating as heck that we're stuck requiring an Arvus to decode Atmos in real time)


Do you have the Arvus in hand? I've been *really* curious what the latency actually is on it.

AES67 on Windows can be a bit of a mixed bag; it's actually comical how bad / hacky some of the commercial VSCs are (they're just outdated IMO; it's gotten easier to work with PTP timestamps on Windows in the last few years).


mumford

  • Recent member
  • Posts: 5
Re: NEW: VST Latency Compensation
« Reply #10 on: September 29, 2023, 11:28:42 am »

Quote from: eve
You're sort of close here (I use AES67 extensively). I'm sort of confused by your logic, though: how would you route your AES67 stream from the Arvus (which you're essentially dumping into an ASIO device) back into JRiver, and why?

I am thinking of doing an active crossover.  So, 16 channels out of the Arvus, back into a PC, and crossed over to 26 channels (hybrid 3-way for the base layer: passive between tweeter and mid, active for the woofer; just passive for the Atmos speakers; and 2 sub channels).  It is not necessary to route the audio back to JRiver; it can go to a different convolver or even to a separate PC, as long as I have a way to accurately sync them.  But routing back to JRiver has an advantage: JRiver can sync audio with video and skip video frames if needed.

Quote
The way I'd set this up is JRiver > HDMI > Arvus > AES67 Stream > PC (how you receive the AES67 stream may matter, there's a number of VSC options) > DAW/VST Host and then from there, figure out how much delay you're going to need to add to JRiver (which is playing the video and outputting audio over HDMI).
You need to host Dirac in this case if your goal is Atmos since standalone is capped at 8ch IIRC?
Planning on running ART as a VST on the PC.  It should not have a channel limit.

Quote
It's definitely a niche use case (and it's frustrating as heck that we're stuck requiring an Arvus to decode Atmos in real time)


Do you have the Arvus in hand? I've been *really* curious what the latency actually is on it.
No, I don't have one yet. 
Quote
AES67 on Windows can be a bit of a mixed bag; it's actually comical how bad / hacky some of the commercial VSCs are (they're just outdated IMO; it's gotten easier to work with PTP timestamps on Windows in the last few years).

The ability to use global PTP timestamps would make things easier.

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #11 on: October 02, 2023, 11:02:05 pm »

Quote from: mumford
I am thinking of doing an active crossover.  So, 16 channels out of the Arvus, back into a PC, and crossed over to 26 channels (hybrid 3-way for the base layer: passive between tweeter and mid, active for the woofer; just passive for the Atmos speakers; and 2 sub channels).  It is not necessary to route the audio back to JRiver; it can go to a different convolver or even to a separate PC, as long as I have a way to accurately sync them.  But routing back to JRiver has an advantage: JRiver can sync audio with video and skip video frames if needed.

Planning on running ART as a VST on the PC.  It should not have a channel limit.

No, I don't have one yet.

The ability to use global PTP timestamps would make things easier.

Okay yeah I figured it was something fancy like an active x/over setup.

So that's the thing: you're not getting any benefit going 'back into' JRiver re: sync or skipping video frames. All of that is handled by the zone outputting the video (and audio to your Arvus) in this situation. Plus, other than WDM, I don't really know what facilities JRiver provides for monitoring live audio. Using JRiver for the playback here is great; you just can't really come back into it in the way I think you're imagining.

This is definitely more suited to a DAW / VST host. You're very right that it's quite arbitrary where you decide to process the audio; you don't need to bring it back into the same computer system that's doing the playback.

In my setup, my workstation, media playback computers, etc. don't have physical audio devices actually connected. They output their audio via Dante / AES67 (the distinction is irrelevant here, since I relay my Dante stuff into AES67 / Ravenna and back with the same shared time source) and my various endpoints can pick up those streams. So, for example, at my desk I'm generally connected to both a surround monitoring setup and another endpoint that handles headphones. Both get the same multichannel audio 'stream', but of course the headphone endpoint does some fancy downmixing shenanigans: a bit of bs2b-style crossfeed and correction.


If you're comfortable with Linux, IEEE 1588 support is arguably much better than on Windows, and frankly the entire process for getting AES67 up and running on your own is much closer to documented.

I think Windows will get better. A library I use internally for part of my personal VSC implementation got frankly good PTP support on Windows a while back. It's not available in released builds IIRC but if you can figure out how to build it on your own, it's working quite well.

Merging's free reference VSC for Windows is honestly hot garbage; it's exceedingly unstable in my experience. This is across numerous systems and a bunch of different NICs, because, oh yeah, it requires a custom NIC driver to replace your default one, so I figured I should see if any of the supported NICs fared better. They didn't. Lots of BSODs on brand-new, fresh Windows installs and different machines.

Audinate's Dante VSC is phenomenal IMO. If you're sticking with Windows it's a solid bet.

kr4

  • MC Beta Team
  • Citizen of the Universe
  • Posts: 720
Re: NEW: VST Latency Compensation
« Reply #12 on: October 03, 2023, 08:41:42 am »

Quote from: eve
Okay yeah I figured it was something fancy like an active x/over setup.

So that's the thing: you're not getting any benefit going 'back into' JRiver re: sync or skipping video frames. All of that is handled by the zone outputting the video (and audio to your Arvus) in this situation. Plus, other than WDM, I don't really know what facilities JRiver provides for monitoring live audio. Using JRiver for the playback here is great; you just can't really come back into it in the way I think you're imagining.

This is definitely more suited to a DAW / VST host. You're very right that it's quite arbitrary where you decide to process the audio; you don't need to bring it back into the same computer system that's doing the playback.
I am in the same/similar boat, using MC/PC, an Arvus, and a HAPI.  Non-Atmos sources originate from MC (using the DLART plugin) and route to the HAPI.  However, with Atmos (and other immersive sources), it seems appealing to send the intact files from MC to the Arvus and the decoded channels back to MC for DLART and output to the HAPI.  If this is a problem, how else can one implement it?  Using another source to send the Atmos files still requires MC to receive and process them with Dirac.
Kal Rubinson
"Music in the Round"
Senior Contributing Editor, Stereophile

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #13 on: October 03, 2023, 10:15:29 am »

Quote from: kr4
I am in the same/similar boat, using MC/PC, an Arvus, and a HAPI.  Non-Atmos sources originate from MC (using the DLART plugin) and route to the HAPI.  However, with Atmos (and other immersive sources), it seems appealing to send the intact files from MC to the Arvus and the decoded channels back to MC for DLART and output to the HAPI.  If this is a problem, how else can one implement it?  Using another source to send the Atmos files still requires MC to receive and process them with Dirac.

So maybe I'm misunderstanding here but it seems somewhat redundant to go back into MC for the Atmos stuff. I'll explain my thinking.


From what I'm gathering, you're looking to use MC's DSP engine along with DLART.

In my mind, that is not needed. You don't really need MC for this stage.

Though you're left with a question: do you want to use MC for DSP on non-Atmos content and a DSP host (I'll get into this) for Atmos, or just route everything through a DSP host?



This is what it might look like.


For non-Atmos:

MC > Merging's MAD (I assume you're taking advantage of this on Windows?) > AES67 Stream #1 > MAD > DSP Host > MAD > AES67 Stream #2 > HAPI

For Atmos:

MC > HDMI > Arvus > AES67 Stream #1 > MAD > DSP Host > MAD > AES67 Stream #2 > HAPI

A "DSP Host" could be a bunch of different things here, you're looking for at minimum, a way to load a VST plugin, but I'm thinking you probably do some DSP in MC that would need to be duplicated in said host, so you'll possibly require additional plugins depending on what the host offers out of the box or your specific needs. Now this could be on the same system you're doing playback on but personally, it seems ideal to be it's own thing.

I'm using MAD as an example because it's probably the most ideal driver to work with the HAPI on Windows (Merging audio hardware can enable low-latency ASIO support on Windows). Your DSP host doesn't 'technically' need to be Windows; it could be a Mac, and hypothetically it could be Linux (though VSTs are going to be a little more complex).


We have two AES67 streams going on. Stream #1 is what I'd call the source: it's just the audio data coming out of MC or the Arvus, ideally without any serious processing. Stream #2 comes after your DSP host; this is the one you want to be consuming on the HAPI, for example, as it's been processed.


Side note: I'm using AES67 as shorthand for AES67 or Ravenna. The distinction isn't particularly relevant here.

mumford

  • Recent member
  • Posts: 5
Re: NEW: VST Latency Compensation
« Reply #14 on: October 05, 2023, 02:00:43 pm »

Quote from: eve
Okay yeah I figured it was something fancy like an active x/over setup.

So that's the thing: you're not getting any benefit going 'back into' JRiver re: sync or skipping video frames. All of that is handled by the zone outputting the video (and audio to your Arvus) in this situation. Plus, other than WDM, I don't really know what facilities JRiver provides for monitoring live audio. Using JRiver for the playback here is great; you just can't really come back into it in the way I think you're imagining.
Agreed.  But what if a timestamp were carried through the entire chain...  Does anyone know how the broadcast guys do it?  They must sync video with audio somehow; there's no way they just set a fixed audio delay on a Super Bowl broadcast or whatever.

Quote
This is definitely more suited to a DAW / VST host. You're very right that it's quite arbitrary where you decide to process the audio; you don't need to bring it back into the same computer system that's doing the playback.
Agreed.
Quote
In my setup, my workstation, media playback computers, etc. don't have physical audio devices actually connected. They output their audio via Dante / AES67 (the distinction is irrelevant here, since I relay my Dante stuff into AES67 / Ravenna and back with the same shared time source) and my various endpoints can pick up those streams. So, for example, at my desk I'm generally connected to both a surround monitoring setup and another endpoint that handles headphones. Both get the same multichannel audio 'stream', but of course the headphone endpoint does some fancy downmixing shenanigans: a bit of bs2b-style crossfeed and correction.
Noted.
Quote
If you're comfortable with Linux, IEEE 1588 support is arguably much better than on Windows, and frankly the entire process for getting AES67 up and running on your own is much closer to documented.

I think Windows will get better. A library I use internally for part of my personal VSC implementation got frankly good PTP support on Windows a while back. It's not available in released builds IIRC but if you can figure out how to build it on your own, it's working quite well.

I am actually much, much better with Linux than Windows, as I have been running Linux servers for a long, long time.  Windows I just use to play games, with defaults for everything.  The only reason I even consider Windows is that Dirac does not support Linux.

Quote
Merging's free reference VSC for Windows is honestly hot garbage; it's exceedingly unstable in my experience. This is across numerous systems and a bunch of different NICs, because, oh yeah, it requires a custom NIC driver to replace your default one, so I figured I should see if any of the supported NICs fared better. They didn't. Lots of BSODs on brand-new, fresh Windows installs and different machines.

Audinate's Dante VSC is phenomenal IMO. If you're sticking with Windows it's a solid bet.

The reason I am researching Merging's ecosystem is that the Arvus H1-D uses AES67.  We are cooking up a DIY Storm Audio box over at ASR.

https://www.audiosciencereview.com/forum/index.php?threads/new-product-arvus-h1-d.47117/

 

mumford

  • Recent member
  • Posts: 5
Re: NEW: VST Latency Compensation
« Reply #15 on: October 05, 2023, 02:05:16 pm »

I might add that these Dolby Atmos gymnastics are only required to play video.  I can already decode and play Dolby Atmos audio files on a PC using software.

https://professional.dolby.com/product/media-processing-and-delivery/drp---dolby-reference-player/

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #16 on: October 05, 2023, 04:54:04 pm »

Quote from: mumford
Agreed.  But what if a timestamp were carried through the entire chain...  Does anyone know how the broadcast guys do it?  They must sync video with audio somehow; there's no way they just set a fixed audio delay on a Super Bowl broadcast or whatever.


Yep, AES67's use of PTP with RTP actually provided the groundwork for these systems! It very much exists; look at SMPTE ST 2110. However, they're sending the video on the network as a stream alongside the audio.
https://en.wikipedia.org/wiki/SMPTE_2110

It was pretty trivial to get video + audio playback synced with PTP using a popular media framework, with the audio going out directly as AES67 packets rather than hitting any virtual audio device. The video transport works well, but I've only tried compressed material. Hypothetically, with a fast enough network, you could do uncompressed or losslessly compressed video transport.
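The framework isn't named above; purely as an illustration, GStreamer is one popular framework that exposes a PTP network clock, and slaving a pipeline to it looks roughly like this (the pipeline string and multicast address are hypothetical):

Code: [Select]
// Sketch: slave a GStreamer pipeline to the network's PTP clock so the
// locally rendered video and the AES67-style RTP audio share a timebase.
#include <gst/gst.h>
#include <gst/net/gstptpclock.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);

    // Join PTP domain 0 (the same domain as the AES67 gear).
    if (!gst_ptp_init(GST_PTP_CLOCK_ID_NONE, nullptr))
        return 1;
    GstClock* ptp = gst_ptp_clock_new("ptp-clock", 0);

    // Illustrative pipeline: render video locally, send audio as L24 RTP.
    GstElement* pipeline = gst_parse_launch(
        "filesrc location=movie.mkv ! decodebin name=d "
        "d. ! queue ! autovideosink "
        "d. ! queue ! audioconvert ! audio/x-raw,format=S24BE ! "
        "rtpL24pay ! udpsink host=239.69.0.1 port=5004", nullptr);

    // Every element now timestamps against the shared PTP clock.
    gst_pipeline_use_clock(GST_PIPELINE(pipeline), ptp);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
}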




If you're comfortable with Linux:
https://github.com/bondagit/aes67-linux-daemon

This is a good starting place. It uses Merging's Linux driver (really good, not the mess that the free Ravenna Windows one is) with some modifications, and provides an open source replacement for Butler.


A DIY Storm sounds very cool.

It's *really* nice to see other people taking note of AES67; it's been a bit of a journey for me, but worthwhile to sort out.







mumford

  • Recent member
  • Posts: 5
Re: NEW: VST Latency Compensation
« Reply #17 on: October 05, 2023, 06:28:26 pm »

Thanks for the link.  This may actually work: a server treating video and audio as separate AES67 streams, with the final client comparing timestamps and syncing, or doing whatever is required.

Mitchco

  • MC Beta Team
  • World Citizen
  • Posts: 173

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #19 on: December 06, 2023, 10:55:38 am »

Quote from: Mitchco
Another possible approach: https://www.audiosciencereview.com/forum/index.php?threads/introducing-hang-loose-convolver-from-accurate-sound.23699/page-7#post-1786373


Quote
I hear you and been meaning to get back to you. I am working with a company that develops custom digital audio I/O boards compatible with Raspberry Pi. Can't go into too much detail yet but likely be a 1U chassis and the number of digital I/O channels will be configurable. Trying for the "Swiss Army" knife of DSP I/O supporting HDMI, TOSLINK, AES, DANTE, USB, and …

Oh man, consider me intrigued.

eve

  • Citizen of the Universe
  • Posts: 651
Re: NEW: VST Latency Compensation
« Reply #20 on: December 06, 2023, 12:29:22 pm »

Quote from: mumford
Thanks for the link.  This may actually work: a server treating video and audio as separate AES67 streams, with the final client comparing timestamps and syncing, or doing whatever is required.

You wouldn't even need to actually 'send' the video. Again, the way things work in what I've set up is essentially that the playback application syncs itself to the PTP clock on your network. Your audio gets decoded from the video, and instead of sending that audio to a 'device', you push packets containing that audio directly onto the network.

Now if you wanted to get really complex, you could do something with your DSP node where it sends a message back to the initial playback application saying 'hey, my DSP pipeline takes this long to complete', and the playback application offsets the video by that amount. I'm just spitballing here, but essentially: sync the playback pipeline to PTP; playback 'starts', but instead of real audio you send empty audio samples (same channel count and format) to your DSP for, say, half a second. Your DSP node tells the playback application 'this took me 3 ms from receiving the packet to completing my DSP pipeline', then the playback application accounts for that delay, and the actual video + audio stream starts. You could hypothetically cascade this too: if your DSP node sends out an AES67 stream, whatever picks that up could also tell the playback application 'look, it takes me 5 ms from getting a packet to that sample hitting the D/A' and add that into the offset, I guess.
Really though, at +/- 1 ms or 2 ms of video sync error, it's imperceptible in many respects.
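A tiny self-contained C++ sketch of that spitballed warm-up handshake; every name in it is made up for illustration, not any real protocol:

Code: [Select]
#include <chrono>
#include <cstdint>
#include <iostream>

// Hypothetical message the DSP node sends back to the playback app.
struct LatencyReport {
    uint64_t pipelineNanos;  // packet-in to processed-out, as measured
};

// DSP node: time a warm-up block of silence through the processing chain.
LatencyReport measureDspLatency(void (*process)(float*, size_t)) {
    float silence[512] = {};  // same channel count/format as the real audio
    const auto t0 = std::chrono::steady_clock::now();
    process(silence, 512);
    const auto t1 = std::chrono::steady_clock::now();
    return { static_cast<uint64_t>(
        std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()) };
}

int main() {
    auto dummyDsp = [](float*, size_t) { /* convolution would run here */ };
    const LatencyReport r = measureDspLatency(dummyDsp);
    // Playback app: fold the report (plus any cascaded downstream
    // reports) into its video offset before real playback starts.
    std::cout << "delay video by " << r.pipelineNanos / 1e6 << " ms\n";
}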



