Thanks for your thoughts, guys. I should have been more careful with my hurried quotes. I elided a section of that post dealing with ASIO as an interface to "sound cards" (a bigger aspect of its influence on SQ) because that includes many devices irrelevant to OS X, and I didn't want to get into a sidebar about cards. The point is that ASIO bypasses Core Audio and interfaces directly with the hardware (so, for instance, no OS-based volume control).
As I said, I'm no expert in all the factors influencing the extraction of the best facsimile of the recorded event via a digital -> analog recording and playback chain. I do know enough to realize that DoP does not manipulate the values in the sampled data stream. But packing and unpacking does create work (processing overhead) for the sending and receiving devices. Now, by what mechanisms might that affect ultimate SQ? And are there other differences between async USB under Core Audio and under ASIO that might affect SQ? I suspect timing effects ("jitter") and various flavors of noise.
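For what it's worth, here's a minimal sketch of what DoP packing actually does, based on my reading of the open DoP standard (my own illustration, not exaSound's or anyone else's driver code). The DSD bits come out untouched; the "work" is purely in building and tearing down the frames:

```python
# Minimal DoP packing sketch (illustrative only, mono stream).
# Per the DoP standard: 16 DSD bits go into the lower 16 bits of each
# 24-bit "PCM" sample; the top byte carries an alternating marker
# (0x05, 0xFA) so the DAC recognizes the stream as DSD, not PCM.

DOP_MARKERS = (0x05, 0xFA)

def dop_pack(dsd_bytes):
    """Pack a DSD byte stream into 24-bit DoP frames."""
    frames = []
    for i in range(0, len(dsd_bytes) - 1, 2):
        marker = DOP_MARKERS[(i // 2) % 2]      # alternate per frame
        # marker in bits 16-23, two DSD bytes in bits 8-15 and 0-7
        frames.append((marker << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1])
    return frames

def dop_unpack(frames):
    """Recover the original DSD bytes, bit for bit."""
    out = bytearray()
    for f in frames:
        out.append((f >> 8) & 0xFF)
        out.append(f & 0xFF)
    return bytes(out)

dsd = bytes(range(16))                       # 16 arbitrary DSD bytes
assert dop_unpack(dop_pack(dsd)) == dsd      # round trip is lossless
```

The round-trip assertion is the "bit-perfect" part; the loop that builds and strips each frame is the processing overhead both ends have to pay.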
I can't see why jitter wouldn't affect DSD. Perturbing either the transition slopes between values or the regularity of the sample intervals should affect the playback fidelity of any sampled signal. DSD has a much higher sampling rate than even hi-res PCM, so I'd expect it to be sensitive to smaller errors in timing. Each error might have a smaller effect on SQ, and the overall profile of timing effects is no doubt different from PCM's, but it's not zero.
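For a rough sense of scale, here's the standard textbook approximation for jitter-limited SNR of a sampled full-scale sine (not a claim about any particular DAC, and DSD's noise-shaped 1-bit stream changes the details, but it shows why timing errors matter for any sampled system):

```python
# Back-of-the-envelope: a sampling-instant error of dt seconds on a sine
# at frequency f produces an amplitude error of roughly 2*pi*f*A*dt,
# i.e. a jitter-limited SNR of about -20*log10(2*pi*f*t_rms).
import math

def jitter_limited_snr_db(signal_hz, jitter_rms_s):
    return -20 * math.log10(2 * math.pi * signal_hz * jitter_rms_s)

for jitter_ps in (1000, 100, 10):            # 1 ns, 100 ps, 10 ps rms
    snr = jitter_limited_snr_db(10_000, jitter_ps * 1e-12)
    print(f"{jitter_ps:>5} ps rms jitter -> ~{snr:.0f} dB SNR at 10 kHz")
```

That works out to roughly 84 dB at 1 ns, 104 dB at 100 ps, and 124 dB at 10 ps rms for a 10 kHz signal, which is why picosecond-level clocking gets so much attention.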
So much for principles. Even as a trained and experienced listener, it's very difficult for me to attribute improvements in SQ to better signal timing vs. lowered noise from other sources (better rejection of server-related noise into the DAC's analog circuitry, better power filtering in the DAC, better signal handling in the DAC chipset implementation, better filter algorithms, better analog output stage design, etc.). What I can say is that every generation of digital playback, given extremely high-quality analog output stages, has yielded improvements in SQ, particularly in what's sometimes called digital "smear." That includes successive DACs using the same Sabre DAC chipset. I'd expect advances in timing management (including minimizing timing errors induced in the DAC device by sources other than the "inherent" jitter in the incoming data stream itself) and possibly in filter construction to account for much of the perceived progress, given that analog audio circuit design is already a pretty mature area.
But back to ASIO. Some other players for Mac (e.g., Audirvana Plus) also bypass Core Audio. No doubt they have their own reasons. G. Klissarov's general statement:
The Core Audio sound system used on Mac OS X is much better compared to the Windows sound system, but it has the same limitations when it comes to DSD. Both Windows and Mac need a workaround like DSD over PCM (DoP) for DSD support.
In my opinion asynchronous USB Audio Class 2.0 implementations with Core Audio are inferior compared to ASIO.
Here is his more detailed explanation as reported in Steve Plaskin's AudioStream review of the e22 (9/4/2014):
Computer sound systems are made and optimized for two purposes:
To be user friendly.
To be compatible with everything and everybody, to support a wide range of audio applications and hardware, from Skype to portable players and games.
The need for convenience and versatility and the ever-increasing demand for new features cause great complexity. Sound quality is a secondary concern and it is often compromised.
Basically the sound system and the drivers have to deliver performance in several ways:
The sound stream data have to be delivered from the player software to the DAC chip without errors or unwanted changes. This is called bit-perfect operation.
The precision of playback timing is of huge importance. Each data sample must appear on the input of the DAC chip at the right time, no sooner and no later. Computer clocks can run at the wrong speed and also can be jittery. The best solution to the inconsistent computer timing is the use of asynchronous protocol for exchange of data between the computer and the DAC. Asynchronous means that the DAC can ask the computer for more data when it is needed, instead of being told by the computer the timing of the data beat. Asynchronous operation makes the DAC the master device. Since the cheapest DAC has a better master clock than the most expensive personal computer, it makes sense to put the DAC in charge of the playback timing.
Cutting edge high resolution recordings require support for high DSD and PCM sampling rates. These high rates can test the limits of CPU and USB performance and require driver and sound system efficiency.
[snip "Issues with the Windows Sound System"]
Core Audio Limitations
The Core Audio sound system used on Mac OS X is much better compared to the Windows sound system, but it has the same limitations when it comes to DSD. Both Windows and Mac need a workaround like DSD over PCM (DoP) for DSD support.
DoP creates a significant performance overhead. A DoP marker byte is required for every two bytes of data. This causes 33% overhead for 24bit drivers. For 32bit drivers like ours the overhead can be 50%.
It is quite difficult to implement support for DSD256 using the DoP standard. The DoP implementation of DSD 256 requires support for PCM at 705.6kHz and 768kHz. Such sampling rates are a real challenge for both computer CPUs and USB audio interfaces.
Using the Core Audio system with DoP256 is challenging. The CPU performance requirements for processing 768 kHz sampling rate for DSD256, the 30% to 50% bandwidth overhead of the DoP format, and the need to support 8 channels for some of our DACs test the performance limits of the current software and hardware.
Another issue with Core Audio is the inconsistent support for integer mode. All DAC chips use integer data and integer mode is a way to achieve bit-perfect streaming. Unfortunately integer mode is not available on OS X Lion and Mountain Lion. It was reintroduced on OS X Mavericks, but it didn't work with all drivers. There are two Core Audio driver architectures - Kernel Space and User Space. Most older drivers are Kernel Space and integer mode on Mavericks works only for User Space drivers.
There is another architectural limitation of the Core Audio sound system. With Core Audio the Mac is always the master device, and the DAC is the slave device. The playback timing accuracy is influenced by the computer timing. In my opinion asynchronous USB Audio Class 2.0 implementations with Core Audio are inferior compared to ASIO.
ASIO Benefits
ASIO solves all these issues. ASIO is an audio streaming protocol used in recording studio environments.
Unlike the computer sound systems ASIO is designed for one purpose - sound fidelity.
ASIO is light-weight, bit-perfect, allows for automatic sampling rate switching.
ASIO has native DSD support.
With ASIO the DAC is the master, and the computer is the slave when it comes to controlling playback timing. The timing accuracy of playback can be as good as the DAC's master clock.
ASIO offers better implementation of asynchronous operation than Core Audio and the Windows sound system.
All these factors contribute to the superior sonic fidelity experienced with the exaSound ASIO drivers.
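Two pieces of George's explanation are easy to make concrete. First, the DoP numbers (my own arithmetic from the DoP packing rules, not measurements of the e22):

```python
# Working through the DoP figures quoted above (illustrative arithmetic).
DSD_BASE = 44_100                          # DSD rates are multiples of 44.1 kHz
DSD256_BITRATE = 256 * DSD_BASE            # 11.2896 MHz per channel

# DoP carries 16 DSD bits per PCM frame, so the required PCM frame rate is:
dop_frame_rate = DSD256_BITRATE / 16       # 705,600 Hz (768 kHz for 48k-family DSD)

# Overhead: a 24-bit frame is 16 DSD bits + 8 marker bits;
# a 32-bit frame is 16 DSD bits + 8 marker bits + 8 padding bits.
overhead_24bit = 8 / 24                    # ~33% of the frame is not DSD data
overhead_32bit = 16 / 32                   # 50% of the frame is not DSD data

print(f"DoP frame rate for DSD256: {dop_frame_rate / 1000:.1f} kHz")
print(f"Overhead, 24-bit driver: {overhead_24bit:.0%}; 32-bit driver: {overhead_32bit:.0%}")
```

As I understand it, native DSD over ASIO sends the raw ~11.3 Mbit/s per channel with no marker or padding bytes, which is where the claimed efficiency advantage would come from.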
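Second, the master/slave point. Here's a toy model of the asynchronous "pull" idea (purely conceptual; real USB Audio Class 2.0 async uses an isochronous feedback endpoint, and I'm not claiming this is how exaSound's or Apple's code is structured):

```python
# Toy model: the DAC drains its FIFO at the rate of ITS clock and tells
# the host how many samples to send next, so the host clock never sets
# the pace of playback.
import itertools
from collections import deque

class ToyDac:
    def __init__(self, target_fill=256):
        self.fifo = deque()
        self.target_fill = target_fill

    def consume(self, n_samples):
        """Clock out samples at the DAC's own (accurate) clock rate."""
        for _ in range(min(n_samples, len(self.fifo))):
            self.fifo.popleft()

    def feedback(self):
        """Ask the host for however many samples keep the FIFO near target."""
        return max(0, self.target_fill - len(self.fifo))

class ToyHost:
    def __init__(self, source):
        self.source = source            # an iterator of samples

    def service(self, dac):
        """Send exactly what the DAC asked for: the DAC is the master."""
        want = dac.feedback()
        dac.fifo.extend(next(self.source) for _ in range(want))

dac, host = ToyDac(), ToyHost(itertools.count())
for _ in range(5):                      # a few service intervals
    host.service(dac)                   # host refills only on request
    dac.consume(250)                    # DAC drains at its own pace
print(len(dac.fifo))                    # buffer hovers near target, paced by the DAC
```

The clock that matters is the one draining the FIFO, i.e., the DAC's; the computer only has to keep the buffer from running dry or overflowing. George's claim, as I read it, is that the ASIO path does this handoff more cleanly than Core Audio's async USB implementation.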
I have not yet seen a report of any player software sounding better using Core Audio than using ASIO. I've seen many suggesting the reverse, including Steve's impressions in the above review and those of ted_b at Computer Audiophile using HQPlayer.
So I'm happy for the experienced digital designers and developers on this forum to bow-hunt angels on pinheads with George. From the perspective of a reasonably technically literate consumer, however, reasonable rationales have been put forward on the theoretical side, and several supporting listener reports exist on the empirical side. Lacking the expertise to categorically prove or refute them, I'd summarize my position as:
The designer of my DAC believes it sounds and operates better with ASIO than with Core Audio, as do many listeners. Therefore I have an incentive to see for myself.
NOTE: I still experience dropouts from time to time in MC. My recollection of the relevant threads suggests that the developers attributed this to some interaction with the OS. If ASIO both reduces data overhead AND bypasses aspects of the OS, it seems worth exploring whether ASIO might reduce or eliminate these dropouts.