I'm not sure a single test of a single interface tells us a whole lot, but even assuming that their experimental data is representative and valid, I'm still not sure how well their data supports their hypothesis.
It's true that their graph shows a pretty sharp decrease in phase noise at 1Hz after warm-up. The issue is that no one (not even mojave, who has a system that will play flat down to 7Hz) does any listening at 1Hz. The lower limit of human hearing is around 20Hz, and Red Book CDs generally don't even include content below 20Hz for that reason. In lab conditions, humans have detected sounds down to 10 or 12Hz, but that's really the absolute limit, and only under very artificial conditions. In practice, sounds below 20Hz tend to be felt rather than heard (the "bass hit" in our chests, etc.). For my part, I can't even fully hear a 20Hz tone: 25Hz is fine, but I feel a 20Hz tone about as much as or more than I hear it, and below that, forget it; I hear nothing but my china rattling, even when the tone is very loud.
So even assuming the validity of their tests, if you look at the portion of the graph at 20Hz and above, you'll notice two things: the noise is at -100dB or lower for all categories (which should be very hard to hear), and the four measurements aren't very far apart at all. The 1-hour measurement is the outlier, and the 15-minute measurement is indistinguishable from the 24-hour measurement. The cold measurement's jitter drops off very sharply right at 20Hz. That's not really surprising, given that DACs are engineered with the audible band as their primary focus; you'd expect to see the best behavior between 20Hz and 20kHz.
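As a rough back-of-envelope check on that first point (my arithmetic, and it assumes the graph's values are referenced to full scale, i.e. dBFS, which the article may or may not intend): -100dB is about 1/100,000th of full-scale amplitude, at or below the theoretical quantization floor of 16-bit CD audio.

```python
# Back-of-envelope only. Assumes the plotted dB figures are dBFS
# (my assumption; the measurements may be referenced differently).
def db_to_amplitude(db):
    return 10 ** (db / 20.0)

noise_db = -100.0
print(f"{noise_db} dB -> amplitude ratio {db_to_amplitude(noise_db):.1e}")  # ~1.0e-05 of full scale

# For comparison, the theoretical SNR of 16-bit audio for a full-scale sine is
# about 6.02*16 + 1.76 = 98.1 dB, so its quantization floor sits near -98 dBFS.
cd_floor_db = -(6.02 * 16 + 1.76)
print(f"16-bit quantization floor: about {cd_floor_db:.1f} dBFS")
```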
Experts disagree on the audibility of phase distortion/jitter in general, but based on my own experiments with phase manipulation, I'm inclined to think it is at least potentially audible within the audio band if it's severe/loud enough. What I'm skeptical of is the detectability, or even the relevance, of sub-sonic jitter. Given that sound is generally inaudible below 20Hz (and completely inaudible well before we get down to 1Hz), for these results to be meaningful, someone would have to be able to "feel" jitter, i.e. notice that the sub-sonic rumbling in their home theater is being reproduced with degraded fidelity. I've read arguments that sub-sonic jitter can inject itself into the audible band through intermodulation, but I'd need to see evidence of that, and there's no such evidence in these graphs.
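To be clear about what I'd want to see, here's a toy model (entirely my own construction, nothing from their measurements): treat slow jitter as phase modulation of a 1kHz test tone. Narrowband phase modulation puts the products at the tone frequency ± the jitter rate, roughly 20*log10(deviation/2) below the tone.

```python
import numpy as np

# Toy model, my assumptions throughout: a 1kHz tone whose phase wobbles at 1Hz,
# standing in for "sub-sonic jitter leaking into the audible band via intermod".
fs = 48_000                    # sample rate, Hz
t = np.arange(fs * 10) / fs    # 10 seconds of signal (0.1Hz FFT resolution)
f_tone = 1_000.0               # audible-band test tone
f_jitter = 1.0                 # hypothetical 1Hz phase wobble
beta = 0.01                    # peak phase deviation in radians (arbitrary, small)

x = np.sin(2 * np.pi * f_tone * t + beta * np.sin(2 * np.pi * f_jitter * t))

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)

# Narrowband phase modulation produces sidebands at f_tone +/- f_jitter, at
# roughly 20*log10(beta/2) = -46dB below the tone for beta = 0.01 -- i.e. right
# next to the tone itself, where auditory masking is strongest.
for f in (f_tone - f_jitter, f_tone, f_tone + f_jitter):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{freqs[idx]:8.1f} Hz : {db[idx]:6.1f} dB")
```

Even in that contrived case, the "intermod" products land 1Hz to either side of the tone at a level set entirely by the made-up phase deviation, so whether a real DAC produces anything audible this way is exactly what their graphs would need to show, and they don't.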
So color me skeptical, but I'm open to evaluating any additional supporting data.