Did you use the bit-depth DSP in JRiver when creating the files? Audacity uses 32-bit float when performing any changes to the audio, and it will then apply its own dither to the final output when reducing back to the file's native bit depth. Amplifying to maximum volume in Audacity invalidates the test, IMO.
I applied a -50dB volume adjustment via the Parametric EQ DSP.
Bit-depth was set to 16-bit in the conversion options (not the bit-depth simulator DSP).
In iZotope RX3 I simply applied a gain of -50dB, and exported as a 16-bit file using TPDF dither.
I chose a -50dB adjustment since it pushes the audio from this track into the lower ~6 bits of a 16-bit file (the peaks were approximately -10dB).
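Rough arithmetic behind that, assuming ~6.02 dB of dynamic range per bit:

```python
# Rough bit-depth arithmetic: ~6.02 dB of dynamic range per bit.
DB_PER_BIT = 6.02

peak_dbfs = -10.0               # approximate peak level of the source track
gain_db = -50.0                 # volume adjustment applied before dithering
new_peak = peak_dbfs + gain_db  # -60 dBFS after the cut

# A 16-bit noise floor sits roughly 16 * 6.02 ~ 96 dB down, so the
# peaks now only reach the lowest few bits:
bits_in_use = (16 * DB_PER_BIT + new_peak) / DB_PER_BIT
print(f"peaks at {new_peak:.0f} dBFS, ~{bits_in_use:.1f} bits in use")
# -> peaks at -60 dBFS, ~6.0 bits in use
```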
Both files were then opened in Audacity and had ~60dB gain applied.
Audacity's dither is not relevant here, since dither only applies to the lower bits, and after amplification the audio is now using the highest bits.
I could do the amplification using iZotope instead, but chose Audacity to keep things impartial.
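For anyone who wants to reproduce it, here's a rough numpy sketch of the whole pipeline (my own approximation with made-up function names, not what either program actually does internally):

```python
import numpy as np

def to_16bit(signal, dither="tpdf"):
    """Quantize floats in -1..1 to 16-bit values, with optional dither."""
    lsb = 1.0 / 32768.0
    if dither == "tpdf":
        # two uniform randoms sum to a triangular PDF, 2 LSB peak-to-peak
        d = (np.random.uniform(-0.5, 0.5, len(signal))
             + np.random.uniform(-0.5, 0.5, len(signal))) * lsb
    elif dither == "rpdf":
        d = np.random.uniform(-0.5, 0.5, len(signal)) * lsb  # 1 LSB peak-to-peak
    else:
        d = 0.0  # plain rounding, no dither
    return np.clip(np.round((signal + d) / lsb), -32768, 32767) * lsb

sr = 44100
t = np.arange(sr) / sr
x = 10 ** (-10 / 20) * np.sin(2 * np.pi * 1000 * t)  # peaks at ~-10 dBFS

quiet = x * 10 ** (-50 / 20)        # the -50 dB cut before dithering
q = to_16bit(quiet, dither="tpdf")  # the 16-bit "file"
loud = q * 10 ** (60 / 20)          # ~+60 dB back up, as done in Audacity
# Any fresh dither added at export sits at the *new* LSB, ~60 dB below
# the dither that was baked in at the quiet stage, so it's irrelevant here.
```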
With that said, sample 2 sounded better to me.
Listen to the full samples. There is obvious distortion and noise modulation in Sample 2.
Headphones may help.
I then analyzed the files in JRiver. Sample 2 is 0.3 dB quieter for Volume Level (R128), and its Dynamic Range (R128) is 0.9 dB higher than Sample 1's. I think this means the dither noise has been pushed lower in Sample 2.
Sample 1 should have ~3dB more noise than Sample 2 if Media Center is using 1 LSB dither and iZotope is using 2 LSB, since doubling the dither amplitude doubles its power (+3 dB).
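A quick numpy check of that 3 dB figure (comparing the dither signals alone, ignoring the quantization error underneath them):

```python
import numpy as np

lsb = 1.0 / 32768.0  # 16-bit step size
n = 10**6

rpdf = np.random.uniform(-0.5, 0.5, n) * lsb      # 1 LSB peak-to-peak
tpdf = (np.random.uniform(-0.5, 0.5, n)
        + np.random.uniform(-0.5, 0.5, n)) * lsb  # 2 LSB peak-to-peak

diff_db = 20 * np.log10(np.std(tpdf) / np.std(rpdf))
print(f"TPDF is ~{diff_db:.1f} dB louder than RPDF")  # ~3.0 dB
```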
The volume differences are my fault for letting Audacity just maximize the gain without clipping.
It's really not relevant though, because we are not comparing noise levels. We are listening for distortion.
I could recreate the files using exactly +60dB for both, but it wouldn't make a difference.
When the dither noise is at around -190 dB per Bob Katz's measurements, I'm not sure the difference in how it might sound when amplified has any bearing on real life.
Well, it must be 60 dB down (rather than 190 dB down), since I amplified the signal by ~60 dB.
For what it's worth, I did suspect that there was some noise modulation when listening to certain tracks, though I was concerned that something else might have been the cause, and this just confirms it.
And yes, 24-bit should be a lot better, but I have some devices which only handle 16-bit audio and do use quite a large volume reduction via Media Center - not quite 60dB, but maybe 40dB or so. I know it's not ideal, but that's just how things have to be.
I took a file, reduced it by 50 dB and dithered to 16 bit using JRiver's DSP. I then played the file back with +50 dB in DSP Studio. There was no audible noise when played back at normal listening levels.
Again, be sure that you are converting to a 16-bit file. The track that you use may matter as well; it just happened to be rather obvious with solo piano like this.
It's unlikely to be an issue whatsoever with 24-bit playback, since a loss of ~10 bits (60dB) still leaves you with 14 bits rather than 6.
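Same arithmetic as before, extended to 24-bit:

```python
DB_PER_BIT = 6.02  # dynamic range per bit

for depth in (16, 24):
    remaining = depth - 60 / DB_PER_BIT  # ~10 bits lost to a -60 dB cut
    print(f"{depth}-bit after a -60 dB cut: ~{remaining:.0f} bits left")
# -> 16-bit: ~6 bits left, 24-bit: ~14 bits left
```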
That said, DAC chip manufacturers like ESS have claimed that noise floor modulation is something that we can detect, even at levels which should be "inaudible" below the music. The same argument is used with DSD vs PCM, since DSD has a variable noise floor and PCM should have a flat noise floor.
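To make "noise modulation" concrete, here's a small sketch (my own illustration, not ESS's measurement): quantize a fading tone with and without TPDF dither, and measure the error level across the fade.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
# 440 Hz tone fading from -6 dBFS down to -110 dBFS (below the 16-bit LSB)
level = 10 ** (np.linspace(-6, -110, len(t)) / 20)
x = np.sin(2 * np.pi * 440 * t) * level

lsb = 1.0 / 32768.0
tpdf = (np.random.uniform(-0.5, 0.5, len(x))
        + np.random.uniform(-0.5, 0.5, len(x))) * lsb

for name, d in (("undithered", 0.0), ("TPDF dither", tpdf)):
    q = np.round((x + d) / lsb) * lsb     # quantize to 16 bit
    err = (q - x).reshape(10, -1)         # error in 10 blocks over the fade
    lvls = 20 * np.log10(err.std(axis=1) + 1e-12)
    print(f"{name}: error floor ranges {lvls.min():.0f} to {lvls.max():.0f} dBFS")
# Undithered, the error level follows the music down; with dither it holds
# steady - a constant hiss instead of one that pumps with the signal.
```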