INTERACT FORUM


Author Topic: UPNP synchronized playback at zones using third party renderers  (Read 6941 times)

yoyoc

  • World Citizen
  • ***
  • Posts: 120

I started another thread last week when I found out about the latest UPnP standard for synchronized playback across zones.

Some of the feedback I got concerned the lack of renderers built to the current UPnP version 3 standard, and the fact that different renderers use different clocks, which makes them poorly suited for such sync.

I think there are two possible uses right now that would justify updating MC to the UPnP sync options. First, the use of PCs as zone renderers (there is actually a thread asking whether anyone would be interested in a certain price of PC running Linux). Second, how about updating the UPnP implementation to the sync standard and advising people interested in synchronized playback to use the same renderer in every zone, i.e. all WDTV boxes of the same model, or all of whatever brand?

Does this make sense? Or is it just wishful thinking?

The thing is, my experience tells me MC is great... and many professionals, such as Gordon Rankin, think the same of MC. Other software developers have managed synchronized playback: AirPlay, Sooloos, Squeezebox. I know MC does not make hardware, but maybe requiring the same renderers in all zones could be an answer.

I suggest WDTV boxes: they are around $100, their playback capabilities in terms of formats for music, photos, and video are about as broad as can be, and they are DLNA compatible.

  
Logged

MrC

  • Citizen of the Universe
  • *****
  • Posts: 10462
  • Your life is short. Give me your money.
Re: UPNP synchronized playback at zones using third party renderers
« Reply #1 on: March 18, 2014, 02:18:22 pm »

It is not sufficient to just place the same renderer in all zones.  Clocks on even the same brand/model of device will vary enough that the sound will get out of sync.
Logged
The opinions I express represent my own folly.

yoyoc

  • World Citizen
  • ***
  • Posts: 120
Re: UPNP synchronized playback at zones using third party renderers
« Reply #2 on: March 18, 2014, 08:58:30 pm »

Too bad
Logged

yoyoc

  • World Citizen
  • ***
  • Posts: 120
Re: UPNP synchronized playback at zones using third party renderers
« Reply #3 on: March 19, 2014, 09:59:44 am »

Then, if I build a PC with a multi-channel audio output card, say 12 channels...

Say I have 8 channels for a 7.1 setup in the primary zone, plus 2 additional stereo zones (12 channels total), all analog out from the PC, going into a main 7-channel amp for the 7.1 and a 4-channel amp for the 2 zones...

with MC 19 installed...

Could I have those 3 zones play in sync?

Maybe I should ask this in a separate post... I will.

Logged

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5234
  • "Linux Merit Badge" Recipient
Re: UPNP synchronized playback at zones using third party renderers
« Reply #4 on: March 19, 2014, 10:05:48 am »

Then, if I build a PC with a multi-channel audio output card, say 12 channels...

Say I have 8 channels for a 7.1 setup in the primary zone, plus 2 additional stereo zones (12 channels total), all analog out from the PC, going into a main 7-channel amp for the 7.1 and a 4-channel amp for the 2 zones...

with MC 19 installed...

Could I have those 3 zones play in sync?




Yes, that should work as long as you're using one device for all 12 outputs and you can address the outputs separately in JRiver (i.e. using a non-exclusive audio output mode).  It works because all 12 output DACs share the exact same clock.  Certain audio output devices can also sync to each other via ADAT/SPDIF or word clock inputs, but those are usually more expensive.

There are a few folks who have done exactly this kind of thing on the forum, so hopefully they can weigh in with which interfaces worked for them.
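To illustrate why a single multichannel device holds sync while two separate boxes drift, here is a rough back-of-envelope sketch in Python (the ppm figures are assumed, illustrative crystal tolerances, not measurements of any particular DAC):

```python
# Sketch: why 12 outputs on ONE device stay in sync while two separate
# devices drift apart. Each independent device has its own crystal with
# a small frequency error, measured in parts per million (ppm).
# The ppm values below are assumptions for illustration only.

def drift_ms(clock_error_ppm: float, seconds: float) -> float:
    """Accumulated timing error of a clock after free-running for `seconds`."""
    return clock_error_ppm * 1e-6 * seconds * 1000.0

# Two separate DACs with plausible crystal errors:
dac_a_ppm, dac_b_ppm = +30.0, -20.0
track_seconds = 600.0  # a 10-minute track

# Relative drift is driven by the DIFFERENCE in clock rates:
relative = drift_ms(dac_a_ppm - dac_b_ppm, track_seconds)
print(f"two devices: {relative:.0f} ms apart after 10 minutes")  # 30 ms

# All 12 channels of one multichannel card share a single clock,
# so their relative error is zero by construction:
shared = drift_ms(dac_a_ppm - dac_a_ppm, track_seconds)
print(f"shared clock: {shared:.0f} ms apart after 10 minutes")   # 0 ms
```

A 30 ms gap is right around the echo threshold discussed later in this thread, so even these modest assumed tolerances would be audible within one long track.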
Logged

AndrewFG

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3392
Re: UPNP synchronized playback at zones using third party renderers
« Reply #5 on: March 19, 2014, 11:01:48 am »

Clocks on even the same brand/model of device will vary enough that the sound will get out of sync.

This is an interesting point... that calls out to be disputed...

The latest versions of the UPnP specifications have a SyncPlay method whereby the Control Point tells the Renderer a) an exact starting time at which a particular stream shall commence playing, and b) a playing speed (normally 100%). This imposes two basic functional requirements on the renderer -- namely that it shall have a clock so that i) it knows the time (based on the NTP or SNTP protocols) at which it shall start, and ii) it can hold the time throughout the duration of the stream so that the playing speed is exactly 100%.

If you are running under v1 of the UPnP specifications, then you don't have the SyncPlay command and the renderers do not have an NTP- or SNTP-synchronized clock. Nevertheless, it would theoretically be possible for the MC server to synchronize the playback start time of two or more renderers by holding back its responses to the HTTP GETs of those players, so that all streams start to be transmitted to the renderers at exactly the same time.
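The hold-back idea could be sketched roughly like this (a toy illustration only; the renderer count, the function names, and the use of a thread barrier are all assumptions, and a real UPnP/HTTP server would be far more involved):

```python
# Toy sketch of holding back HTTP GET responses so all renderers start
# together: no stream body is sent until every expected renderer has
# issued its GET. Renderer count and names are assumptions.
import threading
import time

EXPECTED_RENDERERS = 2
start_gate = threading.Barrier(EXPECTED_RENDERERS)
release_times = {}

def handle_get(renderer_id: str) -> None:
    # ... a real server would parse the request and prepare headers here ...
    start_gate.wait()  # hold back until ALL renderers have arrived
    release_times[renderer_id] = time.monotonic()
    # ... then begin transmitting the audio stream body ...

threads = [threading.Thread(target=handle_get, args=(f"renderer-{i}",))
           for i in range(EXPECTED_RENDERERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

spread_ms = (max(release_times.values()) - min(release_times.values())) * 1000
print(f"streams released within {spread_ms:.3f} ms of each other")
```

Of course, a synchronized start says nothing about whether the renderers' clocks can hold that sync for the rest of the track, which is the real question here.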

However, your hypothesis is that even if MC could synchronize renderers by holding back the stream start in this way, it would still be futile, because renderers do not have clocks good enough to hold the playing speed at a constant 100% throughout the duration of the track. IMHO that is certainly a bold hypothesis...

My own feeling is that renderers must have clocks good enough to keep the playing speed constant at 100% throughout the duration of the track. The reason I say this is that a DAC must have a very tight, low-jitter sample clock to ensure that the music is rendered without audible artifacts. To put it quite simply: if the clock were bad enough that two renderers might drift apart over the duration of a track, then those renderers would probably sound like cr*p anyway...

Logged
Author of Whitebear Digital Media Renderer Analyser - http://www.whitebear.ch/dmra.htm
Author of Whitebear - http://www.whitebear.ch/mediaserver.htm

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5234
  • "Linux Merit Badge" Recipient
Re: UPNP synchronized playback at zones using third party renderers
« Reply #6 on: March 19, 2014, 11:16:52 am »

My own feeling is that renderers must have clocks good enough to keep the playing speed constant at 100% throughout the duration of the track. The reason I say this is that a DAC must have a very tight, low-jitter sample clock to ensure that the music is rendered without audible artifacts. To put it quite simply: if the clock were bad enough that two renderers might drift apart over the duration of a track, then those renderers would probably sound like cr*p anyway...

I don't have much experience with DLNA or UPnP renderers, but my experience with DACs is that even two DACs made by the same manufacturer tend to drift apart unless they're synced directly via ADAT/SPDIF or word clock.  I've tried the experiment using JRiver's zone link and a few different DACs, and could not achieve a workable sync with two DACs of the same brand (much less two of different brands) unless they were explicitly synced through a supported digital input.  I think some others on the forum have tried as well, with similar results.

My initial experiment was with a pair of FiiO E7s, which, fair enough, are not exactly cutting-edge technology.  But one of the other DACs I tested later is very well regarded for its noise performance (the ODAC), and two of those also did not stay perfectly synced.  To be fair, the tolerances in my application were very tight: I needed fractions of a millisecond, and I got enough drift to produce audible echo (probably around 30 milliseconds) after ten or so minutes of playback.

So I'm not sure how to account for the issue exactly, except to note that different DACs have different clock recovery methods.  Some are truly asynchronous, in which case they use their own internal clock and ignore everything else; some attempt to rely purely on the PC's clock.  My understanding is that most modern DACs that aren't truly asynchronous use an adaptive timing/buffering system: they use a local clock but try to stay "loosely coupled" to the PC clock.  The adaptive system is the one the ODAC uses, and I think the FiiO does as well.  If I had to guess, I'd suspect the variable drift is a result of the adaptive timing/buffering system, but I haven't been able to test two asynchronous DACs to find out.

Like I said, I'm obviously not an expert on this specific application, so I may be missing something basic that differentiates the DACs in UPnP renderers from USB DACs.  I just have a few personal experimental results suggesting that two identical DACs are not guaranteed to sync even when fed directly from the same PC via USB, unless they specifically support external sync.  While the DACs I tested did not stay in sync, there may well be DACs that will cheerfully do so (I've heard Benchmark claim that their DACs will sync without interconnection).  Also, in full disclosure, this may be a limitation of JRiver's zone link rather than of the DACs, but I haven't thought of a better way to test bit-identical playback to two DACs simultaneously.

In the spirit of inquiry, if anyone has gotten two DACs to sync perfectly without a word clock, SPDIF, or ADAT connection between them, I would welcome the news (I need to add a few channels to my current setup, and could spend much less that way).
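For what it's worth, the numbers reported above imply a fairly large relative clock error; a quick back-of-envelope check (treating the ~30 ms / ~10 minute figures as given):

```python
# Back-of-envelope: if two DACs drift ~30 ms apart after ~10 minutes of
# playback, what relative clock error does that imply?
# The input figures are taken from the observations reported above.
observed_drift_s = 0.030   # ~30 ms of audible echo
elapsed_s = 600.0          # after ~10 minutes

relative_error_ppm = observed_drift_s / elapsed_s * 1e6
print(f"implied relative clock error: {relative_error_ppm:.0f} ppm")  # 50 ppm
```

Tens of ppm is well within ordinary crystal-oscillator tolerances, which is consistent with two nominally identical DACs drifting audibly apart.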
Logged

yoyoc

  • World Citizen
  • ***
  • Posts: 120
Re: UPNP synchronized playback at zones using third party renderers
« Reply #7 on: March 19, 2014, 11:24:50 am »

AndrewFg.

So, same as me, you think that maybe using the same brand and model of renderer in every zone could work, synced via the HTTP GET hold-back method.
Logged

MrC

  • Citizen of the Universe
  • *****
  • Posts: 10462
  • Your life is short. Give me your money.
Re: UPNP synchronized playback at zones using third party renderers
« Reply #8 on: March 19, 2014, 12:21:09 pm »

There are several reasons why two pieces of hardware that appear identical (from a customer's point of view) will not remain in sync.  This was seen even on identical SB devices prior to the hi-rez timer implementation.  The drift was small, but it did occur.

Even apparently identical pieces of hardware can contain differently sourced components that behave with some variation.

NTP synchronization is often implemented in clients as a once-per-hour or once-per-day ntpdate call.  This is different from running ntpd against a high-quality hardware reference clock, or a stratum 2 or stratum 3 server, which speeds up and slows down a "tick".  The one-shot updates are not very accurate, and they can't be, because they lack the historical drift data required to correctly bias the tick for a given piece of hardware.  ntpdate is good enough for setting the clock at boot and keeping you from missing your appointments, but nowhere near the microsecond, let alone millisecond, accuracy required for accurate sync.  How much drift occurs per minute?  It varies (use the NTP tools and watch as drift and bias are adjusted each time a server is polled; it is very complicated).  For a few-minute track, probably very good to excellent sync.  For an hour-long opera, expect some drift.

A few milliseconds of variance between two pieces of hardware is detectable as a sync error, but a few milliseconds of difference will not be noticed by any human listening to a single piece of hardware.  Hey... that 20-minute track was 8 milliseconds too fast... I heard it!
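A rough sketch of why one-shot updates can't deliver millisecond accuracy: between syncs the clock free-runs at its biased rate, so the worst-case error just before the next correction is simply bias times interval (the 50 ppm bias below is an assumed, typical crystal error, not a measured value):

```python
# Worst-case error of a stepped (ntpdate-style) clock just before its
# next correction: between syncs the clock free-runs at its biased rate.
# The 50 ppm bias is an assumed, typical crystal error.

def worst_case_error_ms(bias_ppm: float, sync_interval_s: float) -> float:
    return bias_ppm * 1e-6 * sync_interval_s * 1000.0

bias_ppm = 50.0
print(f"hourly ntpdate: up to {worst_case_error_ms(bias_ppm, 3600.0):.0f} ms off")   # 180 ms
print(f"daily ntpdate:  up to {worst_case_error_ms(bias_ppm, 86400.0):.0f} ms off")  # 4320 ms
```

Contrast that with ntpd's continuous discipline, which learns the bias over time and slews the tick, holding residual error orders of magnitude lower.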
Logged
The opinions I express represent my own folly.

AndrewFG

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3392
Re: UPNP synchronized playback at zones using third party renderers
« Reply #9 on: March 19, 2014, 01:00:19 pm »

Hey... that 20 minute track was 8 milliseconds too fast... I heard it!

Obviously you won't hear a pitch difference.

But if you have a wibbly clock, you (or at least the golden ears among us) might well hear noise and frequency artefacts due to sample clock jitter 'n' stuff.

Logged
Author of Whitebear Digital Media Renderer Analyser - http://www.whitebear.ch/dmra.htm
Author of Whitebear - http://www.whitebear.ch/mediaserver.htm

MrC

  • Citizen of the Universe
  • *****
  • Posts: 10462
  • Your life is short. Give me your money.
Re: UPNP synchronized playback at zones using third party renderers
« Reply #10 on: March 19, 2014, 01:04:31 pm »

I don't mean wobble.  Just oh-so-slightly faster or slower than another unit (frequency).
Logged
The opinions I express represent my own folly.

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5234
  • "Linux Merit Badge" Recipient
Re: UPNP synchronized playback at zones using third party renderers
« Reply #11 on: March 20, 2014, 08:09:34 am »

A few millisecond variance between two pieces of hardware is detectable for sync - but a few milliseconds difference is not going to be noticed by any human listening to a single piece of hardware.  Hey... that 20 minute track was 8 milliseconds too fast... I heard it!

Exactly, and I think that's the key: it doesn't matter if the clock on a single device is very slightly too fast or slow, but two devices whose clocks aren't extremely close to the same speed would lead to problems in a hurry (even if the difference in their clock speeds isn't that big).  In some applications even a drift of a few tenths of a millisecond between two channels can be quite noticeable, and I don't mean "noticeable to the golden-eared"; I mean really, obviously noticeable.

Small amounts of interchannel delay can result in phase cancellation that can be quite dramatic when the two channels are physically close to each other and reproducing the same or similar material.  For example, with a set of bi-amped speakers crossed over at 800 Hz, a difference of around 0.6 milliseconds between the channels is the difference between full volume at 800 Hz and a 30 to 40 dB null centered on 800 Hz, about half an octave wide.  Exactly the "wrong" delay would be very noticeable and would make speech somewhat difficult to understand.

Delays between those extremes result in partial cancellation, which means the setup only sounds "right" within about one tenth of a millisecond of the correct delay.  Because the phenomenon is periodic, if one device were just slightly faster than the other, the constantly increasing interchannel delay would drift from good to bad, wrap back around to good, and then back to bad, until it eventually got far enough apart that the ear perceived it as an echo.  Constantly drifting slowly in and out of the good and bad pockets would be noticeable, even if you never accumulated enough delay to reach echo.

That may sound like an extreme example from an unusual use case, but the same (or very similar) logic applies to interchannel delay in surround setups.  If the center channel or the sub starts to drift away from the mains, you get a similar effect (although it takes much longer to reach "bad" on the sub due to the longer wavelengths, and it stays bad longer at a time).  If you want an illustration of the worst case, try flipping the polarity on just one of the speakers in your surround setup to simulate that speaker having drifted 180 degrees out of phase; then imagine the sound transitioning between normal and inverted at intervals throughout playback.  The bottom line is that in-room interchannel drift can be quite noticeable even in very small amounts.
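The bi-amped example can be checked numerically: a constant delay tau between two sources reproducing the same signal produces comb-filter nulls at odd multiples of 1/(2*tau), so a delay of half the period of 800 Hz puts the first null right at the crossover:

```python
# A constant delay `tau` between two sources playing the same material
# produces comb-filter (destructive-interference) nulls at frequencies
# f = (2k + 1) / (2 * tau) for k = 0, 1, 2, ...

def null_frequencies_hz(delay_s: float, count: int = 3) -> list:
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# An interchannel delay of ~0.625 ms (half the period of 800 Hz) puts
# the first null right at the 800 Hz crossover in the example above:
nulls = null_frequencies_hz(0.625e-3)
print([round(f) for f in nulls])  # [800, 2400, 4000]
```

This is consistent with the ~0.6 ms figure quoted above; the exact null depth in a real room also depends on the relative levels and the acoustics.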

It's much less of an issue if you're just trying to sync two different rooms: there, distance delays will confound any attempt at really perfect syncing anyway, and the other room's sound is quiet enough, and far enough behind the direct in-room sound, that your ear can separate them, so the tolerance can be much looser.  But any attempt at syncing several playback sources that share a room would need to be relatively tight to get good results.
Logged

AndrewFG

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3392
Re: UPNP synchronized playback at zones using third party renderers
« Reply #12 on: March 20, 2014, 11:14:12 am »

^

Sound travels about 1 meter in 3 milliseconds.

So as you walk between a sound source in room A and a sound source in room B, for every meter walked the sound from room A arrives 3 msec later and the sound from room B arrives 3 msec earlier.

So if you have two rooms, say, 5 meters apart (imagine yourself walking between them), this distance effect limits the maximum possible accuracy of time syncing to 2 x 3 x 5 = 30 msec.

The above is plain physics, and neither UPnP nor anything else can change it. So the only question that can validly be directed at UPnP renderers is whether, over the duration of a track, two reasonable-quality commercially available renderers can keep time with each other within this 30 msec error band. This would require the two clocks to be aligned to within 166 ppm over a 3-minute track, or 3 ppm over a 150-minute movie.

Personally I don't have the equipment to test this. But if you guys conclude that such accuracies are impossible for two reasonable-quality commercially available renderers, then please let us close this thread now, since there is nothing that any future development of MC could ever hope to resolve.
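Reproducing that arithmetic (with the 30 msec budget taken as the given):

```python
# Clock alignment needed for two renderers to stay within a 30 msec
# error band over a given programme duration (the budget and durations
# are the figures from the post above).

def required_alignment_ppm(budget_s: float, duration_s: float) -> float:
    return budget_s / duration_s * 1e6

print(f"3-minute track:   {required_alignment_ppm(0.030, 3 * 60):.0f} ppm")    # ~167 ppm
print(f"150-minute movie: {required_alignment_ppm(0.030, 150 * 60):.1f} ppm")  # ~3.3 ppm
```

So a short track is forgiving of ordinary crystal tolerances, while movie-length sync demands clocks a couple of orders of magnitude better matched.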

Logged
Author of Whitebear Digital Media Renderer Analyser - http://www.whitebear.ch/dmra.htm
Author of Whitebear - http://www.whitebear.ch/mediaserver.htm

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5234
  • "Linux Merit Badge" Recipient
Re: UPNP synchronized playback at zones using third party renderers
« Reply #13 on: March 20, 2014, 11:32:40 am »

^

Sound travels about 1 meter in 3 milliseconds.

So as you walk between a sound source in room A and a sound source in room B, for every meter walked the sound from room A arrives 3 msec later and the sound from room B arrives 3 msec earlier.

So if you have two rooms, say, 5 meters apart (imagine yourself walking between them), this distance effect limits the maximum possible accuracy of time syncing to 2 x 3 x 5 = 30 msec.

The above is plain physics, and neither UPnP nor anything else can change it. So the only question that can validly be directed at UPnP renderers is whether, over the duration of a track, two reasonable-quality commercially available renderers can keep time with each other within this 30 msec error band. This would require the two clocks to be aligned to within 166 ppm over a 3-minute track, or 3 ppm over a 150-minute movie.

Personally I don't have the equipment to test this. But if you guys conclude that such accuracies are impossible for two reasonable-quality commercially available renderers, then please let us close this thread now, since there is nothing that any future development of MC could ever hope to resolve.

Just to be clear, I definitely think it is at least in the realm of possibility to achieve loose sync (within 30-40 msec or so) with two renderers in two different rooms, at least for normal music listening (which allows for resyncing every 3 to 5 minutes).  I say that because JRiver can currently get close to the 30-40 msec tolerance with its own clients through zone link (at least with relatively short songs).  It does get a little wacky on longer audio tracks (e.g. ten minutes or more), and I haven't tested the current sync capability with a movie-length item because zonelink doesn't currently work with video.  But the current capability is in the ballpark (or adjacent to the ballpark) of "good enough" for inter-room sync of music, which suggests that with some refinement to account for interchannel drift (somehow) it might be able to get all the way there.

But while inter-room sync could probably be made "good enough" for normal music listening, intra-room sync might not get there with current hardware (see the last paragraph of my prior post).  My point was that the tolerances for two devices in two different rooms versus two devices in the same room differ by orders of magnitude, and that below a certain tolerance the sync problem won't be solvable, because there is currently some irreducible drift introduced by different devices.

Sorry if I confused things  :-[

BTW, it's relatively easy to test how well the current sync functionality works, if you're interested, because 30-40 msec is right around the echo threshold.  If you stand equidistant between two sets of speakers, the sound from both should reach you at exactly the same time.  If you stand there (maybe in the doorway between the rooms) and hear substantial echo, you know the sync is likely more than 30 msec off.  That's one of the ways I know the current sync isn't quite there yet.
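The equidistant-listener test can be sketched numerically: the perceived offset is the playback sync error plus the difference in acoustic travel time, so standing equidistant isolates the sync error (the distances and the echo threshold below are example values, not measurements):

```python
# Perceived offset between two sources = playback sync error + the
# difference in acoustic travel time from each source to the listener.
# Standing equidistant cancels the travel-time term, isolating the
# sync error. Distances and the 30 ms echo threshold are examples.

SPEED_OF_SOUND_M_S = 343.0  # approximate, at room temperature

def perceived_offset_ms(sync_error_ms: float, d1_m: float, d2_m: float) -> float:
    travel_diff_ms = (d1_m - d2_m) / SPEED_OF_SOUND_M_S * 1000.0
    return sync_error_ms + travel_diff_ms

ECHO_THRESHOLD_MS = 30.0

# Equidistant listener, 4 m from each set of speakers: any echo heard
# is pure sync error.
offset = perceived_offset_ms(35.0, 4.0, 4.0)
print(offset, offset > ECHO_THRESHOLD_MS)  # 35.0 True -> audible echo, sync is off
```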
Logged

AndrewFG

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3392
Re: UPNP synchronized playback at zones using third party renderers
« Reply #14 on: March 20, 2014, 11:59:40 am »

BTW, it's relatively easy to test how well the current sync functionality works, if you're interested, because 30-40 msec is right around the echo threshold.  If you stand equidistant between two sets of speakers, the sound from both should reach you at exactly the same time.  If you stand there (maybe in the doorway between the rooms) and hear substantial echo, you know the sync is likely more than 30 msec off.  That's one of the ways I know the current sync isn't quite there yet.

Just to make sure that I understand you right:

  • Do you observe that (with today's MC and today's renderers) you hear such echo already at the start of a track? In which case there may be things that JRiver could do to alleviate that.
  • Or do you observe such echo only towards the end of a track? In which case there is probably nothing that JRiver could do anyway (even with v2 or v3 of the UPnP specifications)...
Logged
Author of Whitebear Digital Media Renderer Analyser - http://www.whitebear.ch/dmra.htm
Author of Whitebear - http://www.whitebear.ch/mediaserver.htm

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5234
  • "Linux Merit Badge" Recipient
Re: UPNP synchronized playback at zones using third party renderers
« Reply #15 on: March 20, 2014, 12:07:15 pm »

Just to make sure that I understand you right:

  • Do you observe that (with today's MC and today's renderers) you hear such echo already at the start of a track? In which case there may be things that JRiver could do to alleviate that.
  • Or do you observe such echo only towards the end of a track? In which case there is probably nothing that JRiver could do anyway...


(1) After adjusting the zonelink sync slider, there is no echo at the start of a track in my system (although to be clear, all of my DLNA renderers are MC instances on computers, controlled by another computer running JRiver as a DLNA controller, so YMMV with stand-alone renderers).
(2) The echo (for me) typically begins at some point during playback and intensifies as a long track continues.  I haven't taken notes on exactly when the echo appears, but I can do so if it would be helpful.  My recollection is that it takes a few minutes (maybe five?).  Whenever the song changes, there's a slight pause and things start up synced again.
Logged