INTERACT FORUM


Author Topic: JRiver audio testing dilema  (Read 32150 times)

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
JRiver audio testing dilema
« on: August 21, 2015, 06:24:54 am »

Hi All,

I have a small dilemma and don't want to dig a hole any bigger or start a religious flame war so need some advice.

As some of you would know I attended an Audio meet last weekend in Sydney.
JRiver was compared against Audirvana+ and Pure Audio in a single-blind test.

In my opinion the test wasn't run as well as it could have been, but there was limited time and it was done as fairly as could be expected for an audio enthusiast group.

Unfortunately for JRiver, it garnered only 4 votes out of about 35 people in the player comparison test. This was pretty consistent through the testing, with JRiver getting far fewer votes overall but scoring better in a couple of tests, which surprised some.

Tom, the club president, has suggested my review and leaning towards JRiver is biased and that I should probably include the low scores in my post... He's not wrong: I am biased, and I do try to have an open mind, but not at the expense of JRiver publicity.

How people can rate the Audirvana+ player higher than JRiver when Audirvana+ clearly glitched and stumbled twice on playback is a mystery to me.  The glitches automatically disqualified it in my mind, no matter how much I enjoyed its sound, and it did sound good, but I wouldn't say better than JRiver.

I drafted a massive email to go back to Tom the president of the Audio club that ran the testing, but I just don't think it's going to help anything.

Should I just post the imperfect test results in the thread I made and cop it on the chin that JRiver didn't get the votes?

Or should I risk a holy war?

Here's where I got to with the email, but I just think it's not worth it. Some of it I would have published in the forum, but I'm now leaning towards making a very light and humorous admission that it was a fun experiment, that JRiver just wasn't up to it on the day, and leaving it at that.

It's a can of worms and I just think we should let sleeping dogs lie...... What are your thoughts....??

------------------

Hi Tom,

I have Jim’s blessing to use my own judgement.  Not to make a big deal out of it, but I do want to be fair and open-minded and acknowledge that JRiver didn’t test well in the scenario we were using, as I believe that’s the right thing to do.
So I’ll put the results up and the link to Darko’s review. (Was Darko there? It would have been good if he had been there to make it more objective, as a review writer; I apologise if he was there, as I wasn’t introduced.)

I could leave it at that, but I don’t think that would be fair.
I have no wish to belittle the testing, as it is a very difficult thing to do without some sort of bias creeping in, and under the test conditions, with the limited time, I don’t believe you could have done anything differently.

Maybe I’m grasping at straws, but I believe the only thing, short of a double-blind test, that might have influenced the voting would have been to run the two tracks in three lots, each in a different order (nine times), with votes recorded anonymously (written down) and then discussed at the end. (There’s a rough sketch of what I mean at the end of this email.)

The reason for this is that there are a few forms of bias that trickle into these things which can, for the most part, be eliminated with this approach.
These things are also hotly debated, as you’re aware, and I’m on the more analytical, open-minded side of things. It’s not a religious objective-vs-subjective thing for me.
As you know I went in being sceptical but hoping to be able to hear a difference. Many of the attendees would have been in this camp.

I think the testing was about as fair as it could be, and perhaps my suggestions would make no difference, but I do think it’s worth pointing out the forms of bias that can affect such testing.  
This still doesn’t improve JRiver’s position in the scoring, but it perhaps more fairly reflects the test conditions at least.

The forms of bias that I believe need to be considered are:

Confirmation Bias    
Being pre-disposed to liking a specific thing to confirm your own belief – some of the tests were not blind tests.
There was an element of this, but I think the test was as fair as could be under the circumstances.

Mob Mentality Bias vs “I’m a Unique Weirdo Individual” Mentality Bias
Two sides to this obviously; following the herd or the mob is as big a problem as Confirmation Bias.
Without full double-blind and anonymous testing there will be some bias in the voting; it’s human nature.
There may well have been an element of this. Anonymous voting and double-blind testing are the only way to avoid it.

Memory Bias
Being pre-disposed to a certain sound because the user already extensively uses one of the tested products - I would fall into this category as I could clearly hear my preferred sound and it was evident in my voting. Maybe a fluke. (I picked JRiver 3 times)
What would be interesting is to re-survey the attendees as to what their existing preferred product is and put that in the mix of results to see if there is any correlation. (there’s that analytical side of me )

Systematic Bias      
From obvious defects or flaws in the testing – This one is a bit easier: for example, had Audirvana+ not glitched I might have given it votes, as I quite liked it, but I immediately dismissed it because of the glitches.
The other systematic bias I could say existed, though this may be controversial, was the order in which the players were tested.  I heard someone else say during the testing that, statistically, people will vote less for the first player tested than for the second and third where there is no real discernible difference. (JRiver was the first player we auditioned, with two tracks, before moving on to the next player. Another test was done with a single track and the players all back to back, but it was no longer a blind test, with Tom telling us which player was which before playing the track, again opening up another risk of bias.)
I researched this and it’s hard to find any evidence for it in audio testing, but I’m sure there will be PhD papers out there from statisticians who will argue the point until they’re blue in the face... but you know that saying, “Lies, Damned Lies, and Statistics”.

Random Error and Placebo
Audio memory can be long-term, yes, but in a situation where users are trying to hear a difference where none, or only very subtle differences, exist, placebo and random error are certain to creep in.
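
For what it’s worth, Tom, here’s a rough sketch of the kind of randomised, anonymous-ballot schedule I mean. It’s only an illustration; the player labels and track names are placeholders, not the actual gear from the day.

Code:
import random

# Sketch of the proposed single-blind schedule: three players, two tracks,
# three rounds, the player order reshuffled each round, and votes written
# down against anonymous labels only.  Names below are placeholders.
players = {"A": "Player 1", "B": "Player 2", "C": "Player 3"}  # key held by the operator only
tracks = ["Track 1", "Track 2"]

schedule = []
for rnd in range(1, 4):                 # three rounds
    order = list(players)               # anonymous labels A/B/C
    random.shuffle(order)               # fresh presentation order every round
    for label in order:                 # 3 rounds x 3 players = 9 auditions
        schedule.append((rnd, label, tracks))

for rnd, label, trks in schedule:
    print(f"Round {rnd}: audition player {label}, tracks {trks}")

# Ballots are collected per audition; the label-to-player key is only
# revealed once every vote is in.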



Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14254
  • I won! I won!
Re: JRiver audio testing dilema
« Reply #1 on: August 21, 2015, 06:46:55 am »

Mate I think you are on a flogging to nothing.   I enjoyed your reports and sorry I missed it... well sort of.  I'm happy that DACs make a difference, but unless the player code is buggy there is 0 difference with any of the "bit perfect" players, or their $1,000 USB cables feeding the DAC.  I agree that their methodology is not up to double blind but you are dealing with religious fanatics.  I'd suggest that you propose a double-blind "Mega Player Test" for the next meet.

....and then in the MC DSP setting push the boost up 1/2 dB and MC will win  8)
Logged
JRiver CEO Elect

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #2 on: August 21, 2015, 06:56:42 am »

I drafted a massive email to go back to Tom the president of the Audio club that ran the testing, but I just don't think it's going to help anything.

Should I just post the imperfect test results in the thread I made and cop it on the chin that JRiver didn't get the votes?

Or should I risk a holy war?

My Hero.  Sally forth.  I eagerly await thy return.

Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #3 on: August 21, 2015, 06:58:37 am »

My Hero.  Sally forth.  I eagerly await thy return.

Thanks Jim that doesn't really help.. LOL but it is darn funny.
I just sent you another email :)
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #4 on: August 21, 2015, 06:59:52 am »

Mate I think you are on a flogging to nothing.   I enjoyed your reports and sorry I missed it... well sort of.  I'm happy that DACs make a difference, but unless the player code is buggy there is 0 difference with any of the "bit perfect" players, or their $1,000 USB cables feeding the DAC.  I agree that their methodology is not up to double blind but you are dealing with religious fanatics.  I'd suggest that you propose a double-blind "Mega Player Test" for the next meet.

....and then in the MC DSP setting push the boost up 1/2 dB and MC will win  8)

Yes I know... :) You can't reason with the devil.. :)
Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #5 on: August 21, 2015, 07:03:08 am »

On the question of whether our Windows audio is different than our Mac audio, here's a nice test done by user mitchco:

http://www.computeraudiophile.com/content/513-jriver-mac-vs-jriver-windows-sound-quality-comparison/
Logged

mattkhan

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3949
Re:
« Reply #6 on: August 21, 2015, 07:03:31 am »

I would argue there are only downsides here :) so publish the facts without commentary/justification/your explanation of the results and leave people to make up their own mind.
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #7 on: August 21, 2015, 07:05:55 am »

On the question of whether our Windows audio is different than our Mac audio, here's a nice test done by user mitchco:

http://www.computeraudiophile.com/content/513-jriver-mac-vs-jriver-windows-sound-quality-comparison/

As I just sent in my email.

I’ve read that one before; it’s a very interesting read, and it’s why I was questioning Tom about his statement on Mac vs PC.
I still can’t help feeling that something was afoot with the testing, not intentionally mind you, but you can never be sure.

I’m also still not certain that there isn’t more to it than bits in bits out.

Timing is important: there’s jitter, which shouldn’t be audible but can be, and there’s that Dith…ing word that we don’t like to talk about.
And there’s truncation and rounding, related to dithering, and buffering differences, which I’m not convinced either way can or can’t be heard.

Some of the above may not have been measurable in Mitchco’s tests simply due to the resolution of the recording and testing method.
The errors or timing issues could have been filtered or cancelled out, as he was recording at exactly the same bit depth and sample rate, from memory.
It would be interesting to see Mitchco do the same thing but with, say, double the bit depth and sample rate on the recording side (and into a different DAC).

I don’t know!!  Sometimes you just have to sit and enjoy the music.
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14254
  • I won! I won!
Re: JRiver audio testing dilema
« Reply #8 on: August 21, 2015, 07:08:11 am »

I'll see Jim's Image and raise you one.

Logged
JRiver CEO Elect

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #9 on: August 21, 2015, 07:13:58 am »

oohhh it's gone public... I'm in for it now.... Just like Nathan's lamb to the slaughter :) I can see the hordes coming with pitchforks and torches to lynch me now.  ;D  I better run for the hills.....
Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #10 on: August 21, 2015, 07:15:09 am »

I would argue there are only downsides here :) so publish the facts without commentary/justification/your explanation of the results and leave people to make up their own mind.
Even the facts will be disputed.  A bit is a bit, for example.  A PC's timing can't affect a DAC with a buffer and a clock, for another.
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14254
  • I won! I won!
Re: JRiver audio testing dilema
« Reply #11 on: August 21, 2015, 07:15:59 am »

I don't buy the jitter argument.  Sure, there can be jitter in the PC, but it is just feeding the DAC's buffer.  Unless the DAC's buffer runs out, how on earth does the timing with which packets arrive into that buffer matter, as long as they arrive in the right order?  It is the DAC's job to then convert the data from its buffer into an analogue waveform.  It must be the world's worst DAC if the inter-packet timing from a PC to its buffer makes the slightest difference to how it processes its own queue.
Logged
JRiver CEO Elect

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #12 on: August 21, 2015, 07:17:16 am »

Even the facts will be disputed.  A bit is a bit, for example.  A PC's timing can't affect a DAC with a buffer and a clock, for another.

You must be feeling good Jim?  Ready to do some moderating? LOL
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14254
  • I won! I won!
Re: JRiver audio testing dilema
« Reply #13 on: August 21, 2015, 07:21:19 am »

oohhh it's gone public... I'm in for it now.... Just like Nathan's lamb to the slaughter :) I can see the hordes coming with pitchforks and torches to lynch me now.  ;D  I better run for the hills.....

Yup - You're rogered
Logged
JRiver CEO Elect

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5168
  • "Linux Merit Badge" Recipient
Re: JRiver audio testing dilema
« Reply #14 on: August 21, 2015, 07:25:48 am »

I would argue there are only downsides here :) so publish the facts without commentary/justification/your explanation of the results and leave people to make up their own mind.

I agree, but included in the "facts" are the methodology as well as the results.  I think you should post the results *and* the entire methodology.  One is worthless without the other.  

Of particular note, if the players weren't carefully volume matched the results are not particularly meaningful.  JRiver tends to attenuate slightly to account for intersample overages, to leave headroom, etc.  I don't think double blind is absolutely necessary to get good results (an appropriately structured single blind can be almost as good), but failure to match levels (not by ear, but with an SPL meter and pink noise) is going to be fatal to any test's validity.  Less than half a dB of volume difference has been shown to be enough to create a preference with identical source material and playback chains.
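
To put a number on that (a quick back-of-the-envelope sketch, nothing more):

Code:
import math

def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

for db in (0.1, 0.25, 0.5, 1.0):
    ratio = db_to_amplitude_ratio(db)
    print(f"{db:>4} dB  ->  amplitude ratio {ratio:.4f}  (~{(ratio - 1) * 100:.1f}% louder)")

# Half a dB is only about a 6% amplitude difference: far too small to register
# consciously as "louder", but enough to sway a preference vote, which is why
# level matching has to be done with a meter and pink noise rather than by ear.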

Timing is important: there’s jitter, which shouldn’t be audible but can be, and there’s that Dith…ing word that we don’t like to talk about.
And there’s truncation and rounding, related to dithering, and buffering differences, which I’m not convinced either way can or can’t be heard.

I don't buy the jitter argument.  Sure, there can be jitter in the PC, but it is just feeding the DAC's buffer.  Unless the DAC's buffer runs out, how on earth does the timing with which packets arrive into that buffer matter, as long as they arrive in the right order?  It is the DAC's job to then convert the data from its buffer into an analogue waveform.  It must be the world's worst DAC if the inter-packet timing from a PC to its buffer makes the slightest difference to how it processes its own queue.

I have yet to see any actual evidence that two different bit-perfect players can have different jitter profiles.  Jitter isn't mystical, it can be measured, and all claims (that I've seen) that different bit-perfect players have statistically significant differences in jitter performance are unsupported by measurements. The closest to evidence the "jitter" crowd has is an article by a DAC designer that explains how it could happen, but includes no measurements or actual proof.  If anyone has seen any such studies or measurements, I'd actually like to see them as it would be nice to think this argument wasn't a complete red herring.

Different dithering algorithms can create different sound, but the difference is unlikely to be audible at normal playback levels.  But that, at least, is a real possibility in that it is something that could legitimately be different between the players.  If dithering is happening, truncation or rounding shouldn't be happening, right?  The whole point of dither is to prevent noise from truncation or rounding.
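
If anyone wants to see what dither actually buys you, here's a tiny numpy sketch of the textbook effect: a tone smaller than one 16-bit LSB simply vanishes under plain rounding, but survives (as a noisy, signal-correlated output) when 1-LSB RPDF dither is added first.  It illustrates the principle only; it says nothing about what any particular player ships.

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
lsb = 1 / 2**15                                  # 16-bit LSB (full scale = +/- 1.0)
tone = np.sin(2 * np.pi * 1000 * t)
x = 0.4 * lsb * tone                             # a 1 kHz tone smaller than 1 LSB

# Rounding to 16 bits with no dither: the tone disappears entirely.
undithered = np.round(x / lsb) * lsb

# 1-LSB rectangular-PDF dither added before rounding: the tone survives,
# buried in a benign noise floor instead of being truncated away.
rpdf = (np.random.rand(len(x)) - 0.5) * lsb
dithered = np.round((x + rpdf) / lsb) * lsb

print("undithered output is silent (all zeros):", not undithered.any())
print("dithered output correlation with the tone:",
      round(float(np.corrcoef(dithered, tone)[0, 1]), 2))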

It's possible that JRiver may have sounded worse in that environment than the other players.  My money is on volume difference;  it's possible the lack of double blindness may have contaminated it (especially if the crowd knew the fellow throwing the switches well enough to read his expression/body language), but a difference that significant probably resulted from something actually audible I'd guess.
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #15 on: August 21, 2015, 07:33:51 am »

This>>>>  :)
Quote
but a difference that significant probably resulted from something actually audible I'd guess.
Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #16 on: August 21, 2015, 07:35:58 am »

I'll see Jim's Image and raise you one.
That got a good laugh here.
Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #17 on: August 21, 2015, 07:36:48 am »

You must be feeling good Jim?  Ready to do some moderating? LOL
On this forum, a bit is, and forever shall be, a bit.
Logged

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5168
  • "Linux Merit Badge" Recipient
Re: JRiver audio testing dilema
« Reply #18 on: August 21, 2015, 07:38:18 am »

This>>>>  :)

Next time, bring an SPL meter and a usb stick with a pink noise clip  ;D
Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #19 on: August 21, 2015, 07:39:25 am »

I better run for the hills.....
Don't forget your horse.
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #20 on: August 21, 2015, 07:39:29 am »

Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #21 on: August 21, 2015, 07:40:10 am »

Don't forget your horse.

OK... :) thanks...... I think....
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #22 on: August 21, 2015, 08:14:21 am »

Quote
The closest to evidence the "jitter" crowd has is an article by a DAC designer that explains how it could happen, but includes no measurements or actual proof.  If anyone has seen any such studies or measurements, I'd actually like to see them as it would be nice to think this argument wasn't a complete red herring.

I did read a very detailed paper on jitter a while ago. I may have even posted parts of it on here, now I think of it... let me see...
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #23 on: August 21, 2015, 08:18:51 am »

Yep here tis...

I too tried xxxx many moons ago with no perceptible improvement; however, it did screw up the stability of the system and I quickly removed it.

My knowledge has improved since and I'm tempted to take another, more technical look at it.

I just read a whole bunch of latency and jitter info and found this one from Yamaha particularly useful. It's still sinking in as it's quite a complex subject.

http://www.yamahaproaudio.com/global/en/training_support/selftraining/audio_quality/chapter5/10_jitter/

Particularly interesting is this comment.

From my understanding, only the input and output can cause jitter; once in the DSP engine/domain, timing is irrelevant.


Audibility of jitter

 Assuming a 0dBfs sine wave audio signal with a frequency of 10kHz as a worst case scenario, a jitter signal with a peak level of 5ns will generate a combined A/D and D/A jitter noise peak level of:

 E_A/D+D/A = 20·log(2 × 5×10⁻⁹ × 2π × 10×10³) = -64 dBfs

 When exposed to listeners without the audio signal present, this would be clearly audible. However, in real life jitter noise only occurs with the audio signal in place, and in that case masking occurs: the jitter noise close to the audio signal frequency components will be inaudible, so the average audio signal’s spectrum will mask a significant portion of the jitter noise.

 Note that the predicted level is the jitter noise peak level generated by 0dBfs audio signals. In real life, the average RMS level of jitter noise will be lowered by many dB’s because of the audio program’s crest factor and the system’s safety level margins used by the sound engineer. Music with a crest factor of 10dB played through a digital audio system with a safety level margin of 10dB will then generate jitter noise below -84dBfs.

 The audibility of jitter is a popular topic on internet forums. Often a stand-alone digital mixing console is used in a listening session, toggling between its internal clock and external clock. In these comparisons it is important to know that such comparison sessions only work with a stand-alone device. If any other digital device is connected to the mixer, then clock phase might play a more significant role in the comparison results than jitter.

 In uncontrolled tests, many subjective and non-auditory sensations have a significant influence on the result. More details on quality assessment methods are presented in chapter 9.

 In multiple clinical tests, the perception threshold of jitter has been reported to lie between 10 nanoseconds for sinusoidal jitter and 250 nanoseconds for noise-shaped jitter, with actual noise-shaped jitter levels in popular digital mixing consoles being below 10 nanoseconds.
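
Out of curiosity I put the quoted worst-case arithmetic into a few lines of Python to check it; same numbers as the article (5 ns peak jitter, 10 kHz tone, A/D plus D/A stages):

Code:
import math

def jitter_noise_dbfs(f_signal_hz, jitter_peak_s, stages=2):
    """Peak jitter-noise level (dBFS) for a 0 dBFS sine, per the quoted formula:
    E = 20*log10(stages * jitter * 2*pi * f).  stages=2 covers A/D plus D/A."""
    return 20 * math.log10(stages * jitter_peak_s * 2 * math.pi * f_signal_hz)

worst_case = jitter_noise_dbfs(10e3, 5e-9)            # 10 kHz tone, 5 ns peak jitter
print(f"worst-case jitter noise peak: {worst_case:.0f} dBFS")      # about -64 dBFS

# With the article's 10 dB programme crest factor and 10 dB safety margin,
# the jitter noise sits roughly another 20 dB lower:
print(f"typical programme material:   {worst_case - 20:.0f} dBFS")  # about -84 dBFS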


 

and this from that thread...

Jitter is real, audible in some circumstances, and measurable. This is an excellent article from a non-woo-woo engineer on the subject (he also happened to be one of a very few people with instruments actually capable of taking decent jitter measurements): http://nwavguy.blogspot.com/2011/02/jitter-does-it-matter.html

The vast majority of quality modern DACs (including several models I can name under $100) are more or less completely insensitive to jitter: they show jitter distortion well below -100dBFS with normal playback.  Nicer DACs will often have jitter measurements below -120dBFS where you're getting into "resistor noise" levels of distortion.  If you want to shell out for a Benchmark DAC, they've got it down into the -130's.  No one has ever demonstrated that jitter distortion below -100dBFS is even audible in any way.

IMO, it's just not a real problem anymore except on cheap or poorly designed equipment, and becomes less of a problem each year.

Glynor, I'm not a physicist or jitter expert, but my understanding is that those timing measurements are a convention that is effectively "backed out" from the distortion measurements.  Nwavguy gets into it a little bit in his article, but there are techniques to identify and measure the jitter distortion of an audio signal (as distinct from other types of distortion), and then that distortion level is referenced to digital full scale and "translated" into time units based on the sampling rate.  So it's a "derived" measurement, and kind of a silly one IMO, since distortion (of which jitter is just one variety) is customarily expressed in dB below full scale or as a percentage, so converting to a time unit seems designed to obscure the issue.  Check out this article for some explanation: http://www.anedio.com/index.php/article/measuring_jitter

So he probably wasn't consciously being disingenuous, just following along with a silly convention.  
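
And running that "derived" conversion backwards is just the same relationship again; a small sketch (the -110 dBFS level and 11.025 kHz test tone below are made-up example numbers):

Code:
import math

def implied_jitter_seconds(level_dbfs, f_signal_hz):
    """Timing error implied by a jitter-distortion level measured relative to
    full scale, for a test tone at f_signal_hz (inverse of the formula above)."""
    return 10 ** (level_dbfs / 20) / (2 * math.pi * f_signal_hz)

dt = implied_jitter_seconds(-110, 11025)   # e.g. sidebands at -110 dBFS on an 11.025 kHz tone
print(f"implied jitter: {dt * 1e12:.0f} ps")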
Logged

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5168
  • "Linux Merit Badge" Recipient
Re: JRiver audio testing dilema
« Reply #24 on: August 21, 2015, 08:32:57 am »

Just so we don't misunderstand each other: jitter distortion is 100% real, measurable, and generally accepted as a source of audio distortion.  I think it's less relevant than it used to be because DAC design has improved, but there's no question that it exists and is a meaningful parameter in assessing audio quality.

However that's not what I was talking about above.  No one questions that jitter exists and can be audible.  What I question and what is entirely unproven (as far as I can tell) is that different bit-perfect software players can meaningfully affect jitter.  The timing error of packets entering the analog stage of the DAC will be almost entirely the product of the DAC's own clock or adaptive reclocking mechanism, or if it doesn't have one, the clock of last resort (the USB bus clock).  The speed at which the software feeds the ring buffer has no logical relationship to the rate at which the DAC or driver empties the buffer, unless the packets can't come in fast enough and then the distortion is unsubtle (and not technically jitter, just a plain old drop out). 
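
A toy way to see the buffer point (this is a cartoon, not a model of any real driver): the "DAC" below drains its buffer on a perfectly steady clock of its own, and however irregularly the "player" tops the buffer up, nothing changes at the output until the buffer actually runs dry, at which point you get a dropout, not subtle jitter.

Code:
import random

BUFFER_TARGET = 4096      # samples the player tries to keep queued
PULL_PER_TICK = 64        # samples the "DAC" consumes on every tick of its own clock

def simulate(miss_rate, ticks=100_000):
    """Player refills a buffer with irregular timing; DAC drains it steadily.
    Returns how many ticks the buffer ran dry (audible dropouts)."""
    buffered, underruns = BUFFER_TARGET, 0
    for _ in range(ticks):
        # Producer: sometimes late; when it does run, it catches up in bursts.
        if random.random() > miss_rate:
            buffered = min(BUFFER_TARGET, buffered + 2 * PULL_PER_TICK)
        # Consumer: the DAC clock, completely indifferent to the producer's timing.
        if buffered >= PULL_PER_TICK:
            buffered -= PULL_PER_TICK
        else:
            underruns += 1
    return underruns

for miss_rate in (0.1, 0.3, 0.55):
    print(f"player misses {miss_rate:.0%} of refill ticks -> dropouts: {simulate(miss_rate)}")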

I have seen no credible measurements showing that different software players result in different jitter measurements (I'm not even sure I've seen any incredible measurements), and, even conceding that such a thing is real and measurable (but as yet unmeasured) I have also seen no credible studies that would tend to show such a thing is audible.

To be clear, I would welcome a competently performed test that showed that different software players produced meaningfully different jitter profiles.  It would answer a lot of unanswered questions. But no one I've talked to about this specific issue related to jitter has offered any evidence other than John Swenson's theorizing (without any supporting experimental data of any kind).
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: JRiver audio testing dilema
« Reply #25 on: August 21, 2015, 08:33:56 am »

I agree, but included in the "facts" are the methodology as well as the results.  I think you should post the results *and* the entire methodology.  One is worthless without the other.  

The "facts" in this case aren't facts.

Statistics aren't statistics with one data point, and you learn absolutely nothing without repeatability. There is basically nothing more useless than a single data point.

People commonly think "well, it is better than nothing", but that's the complete opposite of the case. It is worse than nothing, because it implies "truth" where none exists. Your sample could be all noise, and if you repeated the exact same test minutes later, you could get the opposite result. It is impossible to know!
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #26 on: August 21, 2015, 08:40:23 am »

Just so we don't misunderstand each other: jitter distortion is 100% real, measurable, and generally accepted as a source of audio distortion.  I think it's less relevant than it used to be because DAC design has improved, but there's no question that it exists and is a meaningful parameter in assessing audio quality.

However that's not what I was talking about above.  No one questions that jitter exists and can be audible.  What I question and what is entirely unproven (as far as I can tell) is that different bit-perfect software players can meaningfully affect jitter.  The timing error of packets entering the analog stage of the DAC will be almost entirely the product of the DAC's own clock or adaptive reclocking mechanism, or if it doesn't have one, the clock of last resort (the USB bus clock).  The speed at which the software feeds the ring buffer has no logical relationship to the rate at which the DAC empties the buffer, unless the packets can't come in fast enough and then the distortion is unsubtle (and not technically jitter, just a plain old drop out).

I have seen no credible measurements showing that different software players result in different jitter measurements (I'm not even sure I've seen any incredible measurements), and, even conceding that such a thing is real and measurable (but as yet unmeasured) I have also seen no credible studies that would tend to show such a thing is audible.

If you read what I posted above in the Yamaha training guide, jitter cannot be present in the DSP, only on the input and output.  sorry that wasn't meant to sound the way it did... you know what I mean :)

here :)
[Image: Jitter by Hilton, on Flickr]
Logged

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5168
  • "Linux Merit Badge" Recipient
Re: JRiver audio testing dilema
« Reply #27 on: August 21, 2015, 08:47:28 am »

If you read what I posted above in the Yamaha training guide, jitter cannot be present in the DSP, only on the input and output.

That's kind of my point.  I agree, and that's why I think "software player induced jitter" is probably a myth. But the thesis of the "software player jitter" theorists is that something the software does when feeding the DAC creates jitter at the output.  My point is that bit-perfect software doesn't really have the ability to affect the timing of the DAC's output stage (unless it fails to supply data fast enough), which you are confirming.  The alternative theory is that electrical activity in the computer screws up the timing of the DAC's output stage (creating jitter), and that different software creates different levels of electrical activity.  I don't think that's a very good theory either, for different reasons.

But regardless of the theoretical explanation, the real point is that there are no measurements (that I know of) showing meaningfully different jitter results with different software players, which you would expect to be achievable if different software players had differing effects on jitter.

Check out: http://archimago.blogspot.com/2013/03/measurements-hunt-for-load-induced.html
http://archimago.blogspot.com/2015/08/measurements-audiophile-sound-and.html#more

I am easy to persuade: show me credible empirical measurements with modern equipment (not crappy old S/PDIF interfaces from the '90s), and I will come along  ;D  I've seen too many people convinced they heard huge differences when deliberately presented with identical playback chains; the ear is easy to fool, instruments are much harder to fool.

sorry that wasn't meant to sound the way it did... you know what I mean :)

No worries, I just want to make sure we aren't talking past each other
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #28 on: August 21, 2015, 09:11:06 am »

Here's something I didn't know; is this true? (from a computer audio forum)

Audirvana is using iZotope for sample rate conversion and dithering, which offers objectively better performance than JRiver's SSRC resampling and 1LSB RPDF dither if you are processing the signal in any way. (volume control, room correction, upsampling etc.)

 However JRiver is far ahead of everything else when it comes to library management and features such as remote control via JRemote, Theater View on a TV, managing playback to several devices at once, and non-audio features like video playback (though the latter is at a very early stage in development on OSX) which is the reason that I will continue to use it instead of anything else for the foreseeable future.

 I'm not sure about the Mac side of things, but on Windows you can at least replace their dither with a VST plugin now, which fixes that aspect of performance. If you are not upsampling, that addresses the last 5-10% difference in audio quality between JRiver and other players in my opinion.

 You're stuck with their resampling though, which is not the greatest.
 It isn't bad, but if you're going to be upsampling everything (e.g. to DSD) there are better options like Audirvana/HQ Player.
 But again, JRiver gets you 90% of the way there as far as quality is concerned, and when you consider how many other features it offers, it's still my preferred player by a long stretch.
 Personally I don't upsample music at all, only video playback, so it's not a major issue for me - though it is something I'd like to see them improve.

-----

and this

Audirvana has Direct Mode, which bypasses Core Audio. I think this makes a bigger difference than upsampling. And it benefits even with higher resolutions.

 When JRiver supports ASIO on the Mac, I think it will likely have matched Audirvana's transparency. But only with exaSound DACs for the time being as only they have released ASIO drivers on the Mac so far AFAIK.
Logged

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #29 on: August 21, 2015, 09:29:18 am »

PS...
I guess the real question here is: does the dither engine ultimately cause the largest audible difference in the bits-in-bits-out argument?  Some of the articles I've just read would indicate so.

If this is the case, I wonder if any of the players had Core Audio disabled, or if there were any up-conversions, down-conversions or resampling happening to make dithering a factor?  Tom did say they were all doing pass-through or bit-perfect (a term he doesn't like).

Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: JRiver audio testing dilema
« Reply #30 on: August 21, 2015, 10:36:28 am »

The dither question has been discussed ad nauseam here in the past (and has, in fact, gotten more than one user banned here before).  I don't know enough to comment intelligently but I know Matt is positive it is nonsense.

The stuff about bypassing Core Audio is absolutely nonsense, spouted by people who don't know what they're talking about.  Core Audio is the OSX audio API. It cannot be "bypassed" unless you use ASIO or re-write all of the audio drivers yourself from scratch.

Their "direct mode" is still through Core Audio, and is the same thing MC does by default on OSX.  They're just bypassing the API's DSP and effects, which MC does as well, of course, since they have their own.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #31 on: August 21, 2015, 10:49:36 am »

The dither question has been discussed ad nauseam here in the past (and has, in fact, gotten more than one user banned here before).  I don't know enough to comment intelligently but I know Matt is positive it is nonsense.

The stuff about bypassing Core Audio is absolutely nonsense, spouted by people who don't know what they're talking about.  Core Audio is the OSX audio API. It cannot be "bypassed" unless you use ASIO or re-write all of the audio drivers yourself from scratch.

Their "direct mode" is still through Core Audio, and is the same thing MC does by default on OSX.  They're just bypassing the API's DSP and effects, which MC does as well, of course, since they have their own.

Thanks Glynor for the no-nonsense confirmation :)  Which colour do you prefer? Black or white? LOL (PS: that's a good thing.)

I know about the issues dithering has caused on the forum (and other forums), which is why it was the last thing I brought up (and carefully).

Just trying to be objective and analyse all the possible variables.

All other things being equal, with the result JRiver scored it can't all have been bias-related, even if there was bias.  I'm trying to pin down where the difference was, as there was clearly an audible difference.

If jitter is out, and bias doesn't explain the significant difference in the voting, then just maybe dither plays more of a role than previously acknowledged.  (puts flame suit on)

I've read lots about dither in the past and again tonight and I understand it very well......  Maybe it's time to put previous opinions aside in the light of these clear test results, even if they're flawed.
If bits are bits, JRiver should have at least scored close to an equal third of the votes.

Dithering is the only thing I can think of that would be different between the players after everything we've discussed. (assuming bit-perfect and volume matched and I have to say I trust Tom on this one)

Logged

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #32 on: August 21, 2015, 12:11:15 pm »

Hilton,
What do you do if you get a bad itch somewhere under all that armor?
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: JRiver audio testing dilema
« Reply #33 on: August 21, 2015, 12:48:15 pm »

Dithering is the only thing I can think of that would be different between the players after everything we've discussed. (assuming bit-perfect and volume matched and I have to say I trust Tom on this one)

I don't. If you assume that there was actually a difference that people could identify, then faulty volume matching is the most likely answer by far. Did you have a decibel meter running throughout the show?  If not, then how do you know they were volume matched??

Other possible reasons:

* The winning player is actually not bitperfect. It'd be easy to "win" many of these contests by lying and making your player (because of an "accidental" bug, to be sure) play everything a quarter-decibel higher than you should be.

* The glitches may have actually improved Audirvana's chances. If everything is equal otherwise, the mind will latch onto what it "remembers" and recall it fondly (the human brain doesn't like it when it is confused).  So, the glitches were singular events that separated that playback from the rest of the pack, making it, therefore, more memorable.  To the listeners, in their memories, this would seem like "yeah, it glitched, but before and after, wow, it just sounded great".  More great than the others?  Who knows. It doesn't matter, because that's the event their brain remembers: the difference between the disruption of the glitch and the otherwise premium-quality sound (because, assuredly, everything else in the audio chain was very premium and low noise).

* The Infinite Monkey conundrum.  I don't know your exact methodology, but I know from your descriptions of everything you were testing that it was not very rigorous, and there were very few (maybe only two?) tests evaluating three different players.  That is an inadequate sample size to allow you to distinguish the result from random noise, as I mentioned above.  Remember, if you capture screenshots of static over and over and over, you can easily get one that looks like this (and eventually will get one like this):

Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #34 on: August 21, 2015, 12:53:04 pm »

The winning player is actually not bitperfect. It'd be easy to "win" many of these contests by lying and making your player (because of an "accidental" bug, to be sure) play everything a quarter-decibel higher than you should be.
Bingo.  I feel the same about the dithering question.
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: JRiver audio testing dilema
« Reply #35 on: August 21, 2015, 01:31:09 pm »

If bits are bits, JRiver should have at least scored close to an equal third of the votes.

I addressed this above, but this statement is absolutely incorrect.

You'd only expect "equal" results once you've done a statistically valid sample.  For a single test, or only a couple, you can't draw any conclusion from the data point.

With the Infinite Monkey conundrum I mentioned above, in any single test, there is an equal chance that the first result will be Shakespeare. A tiny chance, but an equal one to all other possible results. If you did this test, and the first monkey typed out a sonnet and then you stopped, it would seem like an amazing result. Of course, it would never happen again.

It seems counterintuitive to people who don't understand statistics, but noise is the expected result of a single test, not "equal results".  Just like if you roll a single die, you are not more likely to roll a 3 than any other number.
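
If anyone doubts that, it's a two-minute simulation: give 35 listeners three genuinely identical players (every vote a pure coin toss) and look at how lopsided a single session's vote still comes out.

Code:
import random
from collections import Counter

def one_session(listeners=35, players=("A", "B", "C")):
    """All three players are identical; every vote is pure chance."""
    votes = Counter(random.choice(players) for _ in range(listeners))
    return [votes.get(p, 0) for p in players]

spreads = sorted(max(c) - min(c) for c in (one_session() for _ in range(10_000)))
print("median winner-vs-loser gap with zero real difference:",
      spreads[len(spreads) // 2], "votes")
print("sessions that still produced a 'clear winner' (gap of 6+ votes):",
      f"{sum(s >= 6 for s in spreads) / len(spreads):.0%}")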
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14254
  • I won! I won!
Re: JRiver audio testing dilema
« Reply #36 on: August 21, 2015, 04:05:07 pm »

I read the DAR article.  I wish these "shootouts" would use an ABX testing methodology.  Say there was a difference observed.  The next stage would be to investigate why MC sounded different from the other two players.  If identified, can some knob tweaking change it?  Then retest.

Without a cycle like the above you are just going to be clutching at straws, and until then:
Logged
JRiver CEO Elect

blgentry

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 8009
Re: JRiver audio testing dilema
« Reply #37 on: August 21, 2015, 05:32:27 pm »

I've been an audio guy since I was a teenager.  I've listened to a lot of systems in my time and heard lots of strange claims about different things that "help" the sound.  Lots of these trip my BS meter because I don't believe in magic.  I won't say something doesn't exist because I don't understand it.  I'll say that I *want* to understand why things work and if I can't, or if there is NO explanation, then I'm suspicious.

The audiophile circles are full of double talk, fake science, and people that don't understand science or engineering explaining how things "should be working".  I'm extremely skeptical about differences between players, but I haven't personally done an AB test.

Here's what gets me.  This is a big deal, so pay attention:

Data is being sent to a DAC by an audio player.  If the data sent is the same from two different players, playing the same song, then the audio coming out of the DAC should be the same, right?  Why don't we record the data going IN TO THE DAC?  If "bits are bits" then we should be able to record those bits and compare player A to player B via the bits they produced TO THE DAC.  Right?

If these recorded DAC streams are the same, then the jitter discussion has merit and can continue.  But I have a different feeling.  I think the bits getting to the DAC are probably different.  I *suspect*, with zero proof, that some players are changing something.  Subtle EQ or volume changes.  Something different.
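
Here's roughly what I mean, as a sketch.  It assumes you've already captured what each player delivered to the DAC into WAV files at the same rate and depth (the file names are placeholders, and in practice the two captures would have to be time-aligned sample-accurately first):

Code:
import numpy as np
import soundfile as sf   # pip install soundfile

# Hypothetical captures of the stream each player fed to the DAC.
a, rate_a = sf.read("player_a_capture.wav", dtype="int32")
b, rate_b = sf.read("player_b_capture.wav", dtype="int32")
assert rate_a == rate_b, "captures must share a sample rate"

n = min(len(a), len(b))
a, b = a[:n], b[:n]

if np.array_equal(a, b):
    print("Bit-identical: any audible difference happened downstream of the players.")
else:
    diff = a.astype(np.int64) - b.astype(np.int64)
    peak_dbfs = 20 * np.log10(np.max(np.abs(diff)) / 2**31)
    print(f"Streams differ; peak of the null (difference) signal: {peak_dbfs:.1f} dBFS")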

Or this is all just more audiophile snake oil driven by the *desire* to hear differences where none exist.  It's really hard to say without testing.

Brian.
Logged

astromo

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 2239
Re: JRiver audio testing dilema
« Reply #38 on: August 21, 2015, 07:31:12 pm »

All of this takes me back to when I was 1st year grad and finally in possession of a bit of moolah (..and kudos to you Hilton for having a crack - nice work).

I trekked all over town listening to various systems in Hi-Fi stores with a few CDs of my preferred music (for some reason they all seemed to like playing "The Race" by Yello   :P ). The sales pitches from some were enough to make me puke.

Anyhoo, I made my decision and got my gear home:
  • NAD 3240 PE Amp
  • Jamo CL30 Speakers
  • NAD CD Player of the era (memory has faded the model number from my mind)

It's all I could afford and it ended up lasting me the best part of 20 years. I had the speakers reco'd down the track but they were pretty much chopped up at the end. The amp had a loose connection or 5 in there somewhere, so there was some scratchy playback going on in one or the other channel. I ended up "donating" them to a kerbside collection and it all got whisked away before I could blink.

The moral of the story, and getting to the point: I realised all the way back then that once I got my gear away from the funky listening studios and the comparisons and the sales pitches, I could just sit back and enjoy the music.

Isn't that what it should all be about? If it sounds good to you, then soak it up and enjoy it. If it doesn't, then do something different. I find the idea of comparisons interesting but I always apply my sense check and invariably end up thinking, "hmmm, interesting".
Logged
MC31, Win10 x64, HD-Plex H5 Gen2 Case, HD-Plex 400W Hi-Fi DC-ATX / AC-DC PSU, Gigabyte Z370 ULTRA Gaming 2.0 MoBo, Intel Core i7 8700 CPU, 4x8GB GSkill DDR4 RAM, Schiit Modi Multibit DAC, Freya Pre, Nelson Pass Aleph J DIY Clone, Ascension Timberwolf 8893BSRTL Speakers, BJC 5T00UP cables, DVB-T Tuner HDHR5-4DT

mwillems

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 5168
  • "Linux Merit Badge" Recipient
Re: JRiver audio testing dilema
« Reply #39 on: August 21, 2015, 09:12:53 pm »

Let me offer an embarrassing personal anecdote to explain my point of view about listening tests and the fallibility of the ear:

Several years ago I built a pair of home-made bi-amped speakers.  They're each the size of a large washing machine and they took me the better part of a year to build (more than a month of Sundays).  Because they were entirely home-made and I was trying to do an active crossover from scratch, even after they were structurally complete, they still required quite a bit of tweaking to get the crossovers dialed in and the EQ set.  

So I started by just dialing in the EQ that seemed to make sense based on the specifications of the drivers, and taking a couple of quick RTAs with pink noise.  That sounded alright, and all of my friends dutifully told me how great it sounded.  I kept getting headaches whenever listening to the speakers though, and the headaches would go away right after I turned them off.  So I tried tweaking some frequencies, and I'd think I'd made some progress (it sounded better!), and everyone who heard it thought the new EQ sounded better.  Eventually, I even started dutifully "blindly" A/Bing the new EQ with the original (I'd switch between them during playback without telling my guests what I was switching, which isn't blind at all), and my guests would invariably swear the new EQ sounded better.  And I kept going down this "tuning by ear" method, often reversing previous decisions, backing and forthing and adding more and more convoluted filters.  

The most embarrassing moment (and something of a turning point) was when I was A/Bing a filter, and a friend and I were convinced we were on to something excellent.  After ten minutes of this, we realized that the filter bank as a whole (PEQ2) was disabled  :-[. I had been toggling the individual filter, but it wasn't actually even affecting playback.  And we had been convinced we heard a difference.  And the headaches never went away.

Eventually the headaches (and a growing skepticism) prompted me to stop screwing around and take some real logsweep measurements (which were then a relatively new thing for me), and I realized that there was apparently a huge (10+ dB) semi-ultrasonic resonant peak at 18.5 kHz that I couldn't even actually hear. So I fixed it. And then my headaches went away.

And then I took an agonizing look at the rest of the measurement and noticed that my "tuning by ear", which I (and my friends) all felt was clearly superior, had turned the frequency response into a staggering sawtooth.  So I systematically removed the EQ that was pushing things away from "flat," and kept the EQ that contributed to flatness.  The result sounded so different, and so much more natural, that I was embarrassed to have wasted months screwing around trying to use my "golden ears" to tune my speakers.  And my wife (who had been encouraging, but politely non-committal about my EQ adventure) came home and asked unprompted if I had done something different with the speakers, and said they sounded much better.  And she was right; they did. In a few afternoons, I had done more to move things forward than I had in months of paddling around.
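
To make the "measure it, find the peak, correct it" step concrete (toy numbers standing in for a real exported measurement, not my actual data):

Code:
import numpy as np

# Stand-in for a measured magnitude response exported from a logsweep tool:
# frequencies in Hz, levels in dB.  The fake +10 dB bump plays the role of
# the 18.5 kHz resonance described above.
freqs = np.logspace(np.log10(20), np.log10(20000), 400)
measured_db = np.random.normal(0.0, 1.0, freqs.size)
measured_db[np.abs(freqs - 18500) < 600] += 10.0

target_db = np.median(measured_db)          # crude "flat" target
error = measured_db - target_db

worst = int(np.argmax(np.abs(error)))
print(f"largest deviation: {error[worst]:+.1f} dB at {freqs[worst]:.0f} Hz")
print(f"corrective PEQ to try: {freqs[worst]:.0f} Hz, gain {-error[worst]:+.1f} dB")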

The point of this anecdote is not to try and "prove" that my measurement-derived EQ "sounded better" than my ear-derived EQ, or that a flat frequency response will sound best [as it happens, I ultimately preferred a frequency slope that isn't perfectly flat, but I couldn't even get that far by ear].

The point is that taking actual measurements had allowed me to:
1) Cure my ultrasonic frequency induced headaches;
2) Improve the fidelity of my system (in the literal sense of audio fidelity as "faithfulness to the source"); and
3) Ultimately find the EQ curve that I liked best.

My ears (and the inadvertently biased ears of my friends) did not allow me to do any of those things, and in fact led me far astray on 2).  My ears couldn't even really get me to 3) because I kept reversing myself and getting tangled up in incremental changes.  My ears were not even reliably capable of detecting no change if I thought there was a change to be heard.

Once I realized all this, it was still surprisingly hard to admit that I had been fooling myself and that I was so easily fooled!  So I have sympathy for other people who don't want to believe that their own ears are equally unreliable, and I understand why folks get mad at any suggestion that their perception may be fallible.  I've been accused by many indignant audiophiles of having a tin ear, and if I could only hear what they hear, then I'd be immediately persuaded.  But my problem is not that I am unpersuaded that there's a difference: it's that I'm too easily persuaded!  I'll concede, of course, that it's possible that I do have tin ears and other people's ears are more reliable than mine, but the literature concerning the placebo effect, expectation bias, and confirmation bias in scientific studies suggests that I'm not so very alone.

I've seen the exact same phenomenon played out with other people (often very bright people with very good ears) enough times that I find it embarrassing to watch sighted listening tests of any kind, because they are so rarely conducted in a way designed to produce any meaningful information, and they lead into dark serpentines of false information and conclusions.

---------------------------------------------------------------

So to bring things back around:  if some bit-perfect audio players have devised a way to improve their sound, they have presumably done so through careful testing, in which case they should be able to provide measurements (whether distortion measurements on the output, digital loopback measurements, measurements of the data stream going to the DAC, or something) that validate that claim.  If they claim that their output "sounds better" but does not actually measure better using current standards of measurement, they should be able to at least articulate a hypothetical test that would show their superiority.  If they claim that the advantage isn't measurable, or that you should "just trust your ears", then they are either fooling themselves or you.

In a well-established field of engineering in which a great deal of research and development has been done, and in which there is a mature, thriving commercial market, one generally does not stumble blindly into mysterious gains in performance. Once upon a time you could discover penicillin by accident, or build an automobile engine at home.  But you do not get to the moon, cure cancer, or improve a modern car's fuel efficiency by inexplicable accident. In an era where cheap-o motherboard DACs have better SNR's than the best studio equipment from 30 years ago, you don't improve audio performance by inexplicable accident either.  If someone has engineered a "better than bit perfect" player they should be able to prove it, as they likely did their own testing as part of the design process.  If they can't rigorously explain why (or haven't measured their own product), let them at least explain what they have done in a way that is susceptible of proof and repetition.  Otherwise what they are selling is not penicillin, it's patent medicine.

Bottom line: if you and a group of other people hear a difference, there may be a difference, but there may not be.  Measurements are the way to find out if there is really a difference. Once you've actually established that there is a real, measurable difference, only then does it make sense to do a properly conducted listening test to determine if that difference is audible.  Otherwise you're just eating random mold to find out if it will help your cough.

If you want to get to the bottom of it, I'd ask your host if he'd be willing to let you take measurements of the different players with a calibrated microphone, an SPL meter, and some test clips of your own choosing.  He'll probably be game, and you'll likely be able to get some really useful information about whether the players sound different and, if so, why.

Or you could just relax and enjoy the music  ;)
Logged

Awesome Donkey

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 7355
  • The color of Spring...
Re: JRiver audio testing dilema
« Reply #40 on: August 21, 2015, 11:08:33 pm »

Or you could just relax and enjoy the music  ;)

This. Why drive yourself crazy diving into all this stuff? Just enjoy the music!
Logged
I don't work for JRiver... I help keep the forums safe from Viagra and other sources of sketchy pharmaceuticals.

Windows 11 2023 Update (23H2) 64-bit + Ubuntu 23.10 Mantic Minotaur 64-bit | Windows 11 2023 Update (23H2) 64-bit (Intel N305 Fanless NUC 16GB RAM/256GB NVMe SSD)
JRiver Media Center 32 (Windows + Linux) | Topping D50s DAC

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #41 on: August 22, 2015, 01:31:57 am »

mwillems,
Amazing story, very well written.  It should be published in an audiophile magazine.  Not that it would make any difference...

Thank you.

Jim
Logged

KingSparta

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 20048
Re: JRiver audio testing dilema
« Reply #42 on: August 22, 2015, 04:42:56 pm »

I also built a speaker box in 9th grade shop and put it on the headboard of my bed.

When sleeping one night it fell off the headboard and hit me square in the forehead, where I have a small scar to this day.

It is right next to the scar from when I drank a 1/2 gallon of Seagram's Seven and a 1/5th of Smirnoff vodka.

I stopped drinking, and I stopped making speaker boxes.
Logged
Retired Military, Airborne, Air Assault, And Flight Wings.
Model Trains, Internet, Ham Radio
https://MyAAGrapevines.com
Fayetteville, NC, USA

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71294
  • Where did I put my teeth?
Re: JRiver audio testing dilema
« Reply #43 on: August 23, 2015, 11:30:53 am »

Story amazing written publish somewhere at a time.
Logged

JustinChase

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3273
  • Getting older every day
Re: JRiver audio testing dilema
« Reply #44 on: August 23, 2015, 05:25:35 pm »

I've seen so many people saying there are no credible tests done to see if there are actually any differences in 'bit perfect' playback.

There are so many intelligent and knowledgeable folks in this forum, who seem to have the 'right' equipment to do such testing, that I wonder why no one has designed a test to definitively answer this question.

Perhaps some of the folks around can/should discuss how to perform such testing and maybe an answer can be reached which is repeatable and definitive.

I don't know if the other 2 audio players in the test are free, or if someone has all 3 players, but perhaps they can be used in said testing.

Just a thought  ::)
Logged
pretend this is something funny

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: JRiver audio testing dilema
« Reply #45 on: August 23, 2015, 06:13:03 pm »

Tests have been done. Over and over.

Proof does not dissuade the deluded.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

blgentry

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 8009
Re: JRiver audio testing dilema
« Reply #46 on: August 23, 2015, 06:29:54 pm »

I'd be interested in references to tests comparing "bit perfect" players.

Brian.
Logged

KingSparta

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 20048
Re: JRiver audio testing dilema
« Reply #47 on: August 23, 2015, 08:08:28 pm »

Sorry to say, It's a true story.
Logged
Retired Military, Airborne, Air Assault, And Flight Wings.
Model Trains, Internet, Ham Radio
https://MyAAGrapevines.com
Fayetteville, NC, USA

Hilton

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 1291
Re: JRiver audio testing dilema
« Reply #48 on: August 23, 2015, 08:44:02 pm »

I've seen so many people saying there are no credible tests done to see if there are actually any differences in 'bit perfect' playback.

There are so many intelligent and knowledgeable folks in this forum, who seem to have the 'right' equipment to do such testing, that I wonder why no one has designed a test to definitively answer this question.

Perhaps some of the folks around can/should discuss how to perform such testing and maybe an answer can be reached which is repeatable and definitive.

I don't know if the other 2 audio players in the test are free, or if someone has all 3 players, but perhaps they can be used in said testing.

Just a thought  ::)

I'm tempted to do it. I have all the measurement gear, software and microphones to do it properly.
I can get OSX running on the NUC without too much effort too.

Only problem with my system is that it might not be good enough gear or setup to resolve the detail to pick if there are any differences, and my room isn't optimal for trying to do blind tests.
It might be a fun experiment to run though and see if any differences can be measured.
Logged

astromo

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 2239
Re: JRiver audio testing dilema
« Reply #49 on: August 23, 2015, 09:29:40 pm »

I'm tempted to do it. I have all the measurement gear, software and microphones to do it properly.
I can get OSX running on the NUC without too much effort too.

Only problem with my system is that it might not be good enough gear or setup to resolve the detail to pick if there are any differences, and my room isn't optimal for trying to do blind tests.
It might be a fun experiment to run though and see if any differences can be measured.

And if you're going to do these things "properly", wouldn't you really need a room like this?
http://www.acoustics.salford.ac.uk/facilities/?content=listening
Quote
Tests & Standards
This room is one of few test rooms in the UK which can be used to meet the requirements of ITU-R BS 1116-1 for subjective assessments of small impairments in audio systems. This room can also be used to carry out listening tests on loudspeakers to BS 6840-13 / IEC 268-13.

I'd think that the room you do your test in would be one of the biggest points of variability that a skeptic could point at to say "not valid". It's a tough ask to find the right space.
Logged
MC31, Win10 x64, HD-Plex H5 Gen2 Case, HD-Plex 400W Hi-Fi DC-ATX / AC-DC PSU, Gigabyte Z370 ULTRA Gaming 2.0 MoBo, Intel Core i7 8700 CPU, 4x8GB GSkill DDR4 RAM, Schiit Modi Multibit DAC, Freya Pre, Nelson Pass Aleph J DIY Clone, Ascension Timberwolf 8893BSRTL Speakers, BJC 5T00UP cables, DVB-T Tuner HDHR5-4DT