Not wishing to offend anyone, but many maintain that digital audio transmission is essentially an analogue stream of electrons, voltages turning on or off in line with the binary code, and with the potential for the timing to degrade due to, e.g., electrical interference.
I'll bite, at the risk of ridiculing myself or opening a can of worms.
Bits can be measured. First of all, if bits could flip on a USB cable due to interference, someone would have measured it. Sent bits need to match received bits; it's as simple as that, and there is nothing magical about it. Bits don't turn green or purple to affect their state, and ordinary USB data transfers would corrupt if what 'many' suggest were true. If this happened to a digital audio signal, you would hear static ticks, or if enough sequential bits were scrambled, it would turn into garbled noise. Randomly flipped bits will NEVER EVER reduce imaging, muddy the bass or roll off the highs. Bits don't get shuffled around on a cable either; it simply doesn't work like that.
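To make "sent bits need to match received bits" concrete, here is a minimal sketch of how you'd check it. The file names are hypothetical, and it assumes you have some way of capturing the received stream (e.g. a USB analyser or a loopback recording); the point is only that bit-identity is trivially verifiable:

import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large captures don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical capture files: what the player sent vs. what arrived at the DAC side.
sent = sha256_of("sent_stream.pcm")
received = sha256_of("received_stream.pcm")
print("bit-perfect" if sent == received else "bits were corrupted in transit")

If the hashes match, every single bit arrived intact; there is no in-between state where the bits are "a bit worse".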
Maybe this will help. I'll try to explain how I think this works. I'm probably wrong on some or all of the technical bits here (pun intended), but I believe the gist of it is correct. Let's assume you're listening to music and you believe you hear reduced imaging after swapping a USB cable over which digital audio is transferred. Let's establish what would need to happen for that to occur. Imaging exists due to slight delays between the channels and slight level differences between left and right in the particular frequency ranges that make up an instrument. Some kind of interference would have to flip a number of bits in every sample, on both channels and in the correct frequency ranges that make up said instrument, in such a way that reduced imaging becomes audible, all while keeping the integrity of the audio signal (i.e., not corrupting it). That would be a very advanced kind of DSP.
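As an aside, here's a rough sketch (my own illustration, not anything established in this thread) of how imaging actually lives in the decoded sample values: a tiny inter-channel delay plus a level difference pans a tone to one side. Producing this takes coordinated changes across every sample of both channels, exactly the kind of thing random bit flips cannot do:

import numpy as np

SR = 44100                           # sample rate in Hz
t = np.arange(int(0.5 * SR)) / SR    # half a second of time axis
tone = np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone

# Inter-channel time difference: ~0.3 ms, expressed as a sample shift.
itd_samples = int(0.0003 * SR)       # about 13 samples at 44.1 kHz
left = tone
right = np.concatenate([np.zeros(itd_samples), tone[:-itd_samples]])

# Inter-channel level difference: make the right channel slightly quieter.
right = right * 0.8

stereo = np.stack([left, right], axis=1)  # the image now leans to the left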
The fluctuations of this interference would also have to match the clock of the signal; otherwise the interference wouldn't keep hitting the correct bits in each sample, it would gradually drift. If you understand how digital audio works, you'll understand that reducing the amplitude of a digital signal *cannot* be accomplished at random. The same goes for delaying a signal by a few milliseconds to affect imaging; that cannot happen at random due to "some kind of interference". If a signal is delayed at all, the entire signal is delayed, not just the frequencies of some instrument, so placement is unaffected. Really, for interference to accomplish all of that, it would have to come from an alien device. No known interference on earth will accomplish it. Even if it were possible for interference to affect a particular frequency range in a digital signal, it still wouldn't affect the imaging of a single instrument or the soundstage in its entirety. In my view, this is simply impossible.
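A quick way to see why a random flip produces a tick rather than a subtle effect: flipping one bit of a 16-bit PCM sample changes its value by a power of two, a sudden jump at one instant, not a gentle level or timing shift. A toy model (my own illustration, assuming signed 16-bit samples):

def flip_bit(sample: int, bit: int) -> int:
    """Flip one bit of a signed 16-bit PCM sample (toy model of interference)."""
    raw = sample & 0xFFFF              # two's-complement bit pattern
    raw ^= 1 << bit                    # the single flipped bit
    return raw - 0x10000 if raw & 0x8000 else raw

sample = 1000                          # some quiet sample value
for bit in (0, 7, 14, 15):
    print(bit, flip_bit(sample, bit) - sample)
# Errors are +/- 2**bit: from an inaudible +/-1 up to a full-scale click
# when a high bit flips. Nothing in between looks like "reduced imaging".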
Interference simply makes a nice smooth signal appear jagged. I believe that is all it does. But as long as the receiving end interprets the voltage peak as high enough to be a bit, it just received a bit. No more, no less. Not a small bit or a loud bit. A bit is a bit. If it doesn't see a high enough voltage, it didn't receive a bit. How simple can it be? So the question really needs to be: will interference dampen a voltage peak to the point where it is no longer interpreted as a bit, or can it create voltage peaks so that the receiving end believes it's receiving bits when it shouldn't? The answer, IMHO, is no. Maybe in a laboratory with artificial interference created for the sole purpose of producing such an effect, but in a living room? Not a chance.
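To illustrate the threshold idea (a toy model with made-up logic levels, not actual USB signalling, which uses differential pairs and far more margin than this): add random noise to a clean bit stream and decode it by threshold. As long as the noise stays inside the margin, the recovered bits are identical, however jagged the waveform looks:

import random

bits = [1, 0, 1, 1, 0, 0, 1, 0]
HIGH, LOW, THRESHOLD = 3.3, 0.0, 1.65   # toy logic levels in volts

# Transmit: each bit becomes a voltage, then picks up some noise.
noisy = [(HIGH if b else LOW) + random.uniform(-0.5, 0.5) for b in bits]

# Receive: anything above the threshold is a 1, anything below is a 0.
decoded = [1 if v > THRESHOLD else 0 for v in noisy]

print(decoded == bits)   # True: jagged voltages, identical bits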
I'm happy to be proven wrong, but until then I believe, with all due respect, that people claiming things like you suggest simply do not understand how it works. I believe these people take the effects of interference on an analogue signal and apply them 1:1 to a digital signal. To make that work, they reason that the voltage signal is actually an analogue signal as well, so their theories apply. In part this is true: interference does apply, but it doesn't have the effect they claim.
In this respect I wondered whether different buses, e.g. USB, or forms of data transmission, e.g. S/PDIF, may confer some advantage.
My area of expertise is more what happens beyond the human eardrum, so I rely on computer science experts for the 'bits' preceding this :-)
USB is superior to optical or coax. Not only are optical and coax limited in their supported formats (usually 24-bit at 96 or 192 kHz max, whereas USB often supports up to 32-bit at 384 kHz), they are also synchronous transfers: the sender determines the clock. USB audio, on the other hand, is asynchronous, and the timing of the transfer matters far less, because the signal is buffered first and then decoded from the buffer on the DAC's internal clock. Whether you'd hear the difference is another matter entirely.
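A rough sketch of why the asynchronous scheme relaxes the timing requirement (a toy model, not the actual USB Audio Class protocol): packets land in a buffer whenever they happen to arrive, and the DAC pulls samples out at the steady rate of its own clock, so arrival-time jitter never reaches the analogue stage:

from collections import deque

buffer = deque()

def on_usb_packet(samples):
    """Packets may arrive early, late or in bursts; it doesn't matter."""
    buffer.extend(samples)

def dac_tick():
    """Called once per sample period by the DAC's own, stable clock."""
    if buffer:
        return buffer.popleft()    # converted to analogue at a steady rate
    return 0                       # underrun: an audible dropout, not "soft highs"

The failure mode of this design is a buffer underrun, which is an obvious dropout, again not a subtle loss of imaging.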