I'm going to have to look into volume leveling in more detail. I've been using two options from DSP Studio: Volume Leveling, and Adaptive Volume with Adaptive Volume set to Peak Level Normalize. I thought that preserved EVERY BIT OF DYNAMICS in every file. If it's impacting dynamics even by a small amount, I want to disable the offending option. All I'm looking for is a sort of average volume level across my listening session; I want no alteration of dynamics.
Brian.
Neither Volume Leveling nor Peak Level Normalize performs dynamic range compression. Some of the other Adaptive Volume settings do perform dynamic range compression.
However, any reduction in digital volume, from any source, will result in some theoretical loss of dynamic range at the bottom: not through compression, but through the quietest information effectively "falling off the bottom" and getting dithered away.
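For illustration, here's roughly what that looks like in code: a minimal Python/NumPy sketch of digital attenuation followed by TPDF dither back to 16 bits. This shows the general technique only, not JRiver's actual implementation, and the function name is mine:

```python
import numpy as np

def attenuate_16bit(samples, gain_db):
    """Attenuate 16-bit samples by gain_db, re-dithering to 16 bits.

    Scaling pushes the quietest content below the 16-bit LSB, and the
    TPDF dither randomizes (rather than truncates) whatever falls off
    the bottom.
    """
    rng = np.random.default_rng()
    gain = 10 ** (gain_db / 20.0)                 # dB -> linear factor
    scaled = samples.astype(np.float64) * gain
    tpdf = (rng.uniform(-0.5, 0.5, samples.shape)
            + rng.uniform(-0.5, 0.5, samples.shape))  # +/-1 LSB triangular dither
    return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)

# A 1-LSB "signal" cut by 12 dB lands well under the LSB and survives
# only statistically, as dither noise:
print(attenuate_16bit(np.full(8, 1, dtype=np.int16), -12.0))
```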
But given that 16-bit music offers 96dB of effective resolution, and most music has an effective dynamic range well under 20dB, I wouldn't sweat the small losses from volume attenuation; you're not likely to lose any actual information even with very significant attenuation. Additionally, if your DAC supports 24-bit output and you're listening to 16-bit files (e.g. CD audio), JRiver automagically pads the bit depth, so you're not even theoretically losing any information until you've applied 48dB of attenuation in that scenario.
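The arithmetic behind that 48dB figure is just spare bits times ~6dB per bit (the helper name here is mine):

```python
import math

DB_PER_BIT = 20 * math.log10(2)   # one bit of resolution ~= 6.02 dB

def lossless_headroom_db(source_bits, output_bits):
    """dB of digital attenuation available before any source bit is lost."""
    return max(output_bits - source_bits, 0) * DB_PER_BIT

print(lossless_headroom_db(16, 24))   # ~48.2 dB for 16-bit files on a 24-bit DAC
```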
And even assuming there was information down there to lose (which there isn't), unless you're listening at volumes louder than 100dB (unlikely for regular listening in a home setup), you wouldn't be able to hear it even in laboratory conditions. Given that the noise floor in a quiet home during the day is typically between 35 and 45dB, and even loud home listening usually doesn't get much above 90dB, it's unlikely you'd be able to hear any theoretical information loss until you'd attenuated the volume by quite a bit.
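To put numbers on that, here's the back-of-envelope version, using the assumed figures from above:

```python
peak_spl   = 90.0   # loud home listening, dB SPL at digital full scale
bit_dr     = 96.0   # effective resolution of 16-bit audio, dB
room_noise = 40.0   # quiet home during the day, dB SPL

# Anything lost to attenuation-plus-requantization plays back at or
# below the 16-bit floor, which maps to this sound pressure level:
floor_spl = peak_spl - bit_dr
print(f"Lost detail sits at or below {floor_spl:.0f} dB SPL, "
      f"{room_noise - floor_spl:.0f} dB under the room's own noise.")
# -> -6 dB SPL, 46 dB under the room noise: inaudible outside a lab.
```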
So, I wouldn't sweat losing a few dB to Volume Leveling/Peak Level Normalize.
Is it maybe better not to max out the processor, and to leave some of the cores (or at least virtual cores) free for such unexpected events? With this in mind, how many files should be analyzed simultaneously?
Using all the virtual processors will actually make it go slower, in my experience. If you want speed and don't care about using the computer in the meantime, my experience has been that analyzing the same number of files as actual cores (4) produces the fastest outcome on my i7s, with one more than the number of cores (5) a close second (it might be faster depending on the day). Six and up just seemed to slow things down.
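If you were scripting your own batch analysis along those lines, the sizing rule might look like this. A Python sketch only (JRiver's analyzer obviously isn't this): `analyze` is a hypothetical stand-in, and halving the logical count assumes 2-way hyperthreading:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def analyze(path):
    """Stand-in for per-file audio analysis (hypothetical)."""
    ...

def analyze_library(paths):
    # os.cpu_count() reports logical cores; halving it assumes 2-way
    # hyperthreading and recovers the physical-core count that worked
    # best above (4 workers on a 4-core i7, with 5 a close second).
    physical = max((os.cpu_count() or 2) // 2, 1)
    with ProcessPoolExecutor(max_workers=physical) as pool:
        return list(pool.map(analyze, paths))
```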