I tried EncSpot once. Nice bar graphs, plenty of numeric figures, and so on... but frankly, how many people understand how those figures relate to "sound quality"? Personally, I don't (or rather, I never tried, though I know a bit about "wavelets")!
I used to work in an image compression algorithm development lab. From what I recall, a genuine estimation of encoding quality requires a comparison with the original uncompressed data. And then comes the question of how you compare. Quadratic error in the temporal domain? Doesn't work, since most audio compression algorithms alter the phase of the signal... Quadratic error in the frequency domain? Crude... A psychoacoustic model? We still don't know exactly how the ear works...
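A toy sketch of the phase problem mentioned above (my own illustration, not anything EncSpot does): a pure tone and a 90°-phase-shifted copy of it sound identical, yet the time-domain quadratic error between them is huge, while the error on the magnitude spectrum is essentially zero. Sample rate and frequency are arbitrary choices for the demo.

```python
import math

fs = 8000                      # toy sample rate (arbitrary, for illustration only)
n = fs                         # one second of "audio"
f = 440.0                      # a pure tone at 440 Hz

original = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
shifted = [math.sin(2 * math.pi * f * i / fs + math.pi / 2) for i in range(n)]

# Quadratic (mean squared) error in the temporal domain: large,
# even though the two signals are perceptually identical.
mse_time = sum((a - b) ** 2 for a, b in zip(original, shifted)) / n

def magnitude(signal, freq, fs):
    """Magnitude of the DFT of `signal` at a single frequency bin."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return math.hypot(re, im)

# Quadratic error on the magnitude spectrum at the tone's frequency:
# essentially zero, because only the phase changed.
err_freq = (magnitude(original, f, fs) - magnitude(shifted, f, fs)) ** 2 / n

print(round(mse_time, 3))   # ~1.0
print(err_freq < 1e-6)      # True
```

So a naive time-domain metric would flag a phase-only codec as terrible while a listener hears no difference at all, which is exactly why the comparison question is hard.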
And speaking of "wavelet"-based compression algorithms and high-frequency coefficient reduction: OK, the average person can't hear anything above 16-18 kHz... but experiments tend to show that even though we can't hear those frequencies in isolation, our sensitivity to transients IS altered when they are suppressed. The sound signature of classical instruments (violin, harpsichord, ...) depends a lot on those transients. So settings that are very good for some modern electro music can actually be absolutely horrible for classical music...
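A quick sketch of why cutting high frequencies hurts transients (my own construction, using a crude moving-average low-pass rather than a real codec filter): apply it to an ideal click and the attack gets both quieter and smeared out in time, even if the removed band would be inaudible as a steady tone.

```python
# An ideal click: all its energy concentrated in a single sample.
click = [0.0] * 64
click[32] = 1.0

def moving_average(signal, width):
    """Very crude low-pass filter: average each sample with its neighbours.
    Real codecs use far better filters, but the smearing effect is the same idea."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / width)
    return out

smoothed = moving_average(click, 9)

print(max(click))                          # 1.0  -- sharp attack
print(round(max(smoothed), 3))             # ~0.111 -- much softer attack
print(sum(1 for s in smoothed if s > 0))   # 9 -- energy spread over 9 samples
```

The click's sharp edge is precisely what high frequencies encode; remove them and a harpsichord pluck turns into a mushier onset.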
My electro-acoustics teacher used to say: "How does it SOUND to YOU?!?" (the guy was no "greenhorn": he designed one of Switzerland's best-sounding symphony halls...). Personally, I ain't got a good ear... so I never managed to hear differences in files encoded with settings higher than LAME's "--alt-preset standard". Lucky me, I guess...
ERRATA:
I mentioned "wavelets" in this post. Actually, I should have spoken of "subband coding" :-)