A few quick comments, based on my spending many years as an audio/radio engineer in recording studios and broadcast stations (certified Senior Engineer by Society of Broadcast Engineers)...
For most musical instruments, and even some voices, there's no way to accurately record and reproduce their true frequency response. Microphones are not FLAT, so the fidelity problem begins right there.
And neither microphones nor audio chains nor recording media can fully handle a "live" dynamic range. In fact, a piano is the hardest instrument to record and play back accurately -- evidenced by its original name, which included the phrase "piano e forte", Italian for "soft and loud". The power put out by banging ten keys of a concert grand piano while pressing the sustain pedal exceeds what any mike+recorder+medium+amp+speaker system can handle. There are amplitude (loudness) limitations with any medium, whether it's analog tape, vinyl, or digital.
If a recording setup is configured to try to handle a loud piano, the quietest sounds recorded at the same time usually get lost in noise. The media noise problem is much improved in the digital world, but there is still noise in any electronic system due (literally) to molecular vibration, plus ambient air movement, media imperfections, and other stuff including cosmic rays penetrating electronics and generating noise. It's a problem that is better than it used to be but still a factor to be overcome.
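To put a rough number on the window we're squeezing into, here's a back-of-the-envelope sketch (my illustration, using the standard quantization-noise approximation, not figures from any particular gear):

    # Theoretical dynamic range of ideal N-bit PCM, from the standard
    # quantization-noise approximation ~ 6.02*N + 1.76 dB.
    def pcm_dynamic_range_db(bits: int) -> float:
        return 6.02 * bits + 1.76

    for bits in (16, 24):
        print(f"{bits}-bit PCM: ~{pcm_dynamic_range_db(bits):.0f} dB theoretical")
    # Prints ~98 dB for 16-bit and ~146 dB for 24-bit -- and that's before
    # mic self-noise, preamp noise, and room noise eat into the usable window.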
So, among many variables, the recording system can't capture the true live performance, and the various media and formats can't store it perfectly.
Then maybe the biggest factor: the user's playback system can almost never come close to reproducing what was captured -- which isn't even the same as the original performance anyway. The ONE recording must be reasonably listenable in environments varying from quite noisy vehicles to quite quiet rooms. There's almost always ambient noise, plus non-linear speakers/headphones, plus very uneven frequencies reaching the ear due to reflection and absorption. And don't even start contemplating the bumpy frequency response of the human ear -- each human ear, because we all hear differently.
Result: Whatever we *think* we are hearing is different -- sometimes very different -- from what we might have heard standing in the recording studio. Which isn't a good or bad thing, just a fact.
Recording and broadcast engineers have long dealt with this by using frequency equalization and level compression to compensate for limitations all along the process. Both processes are subjective -- some would say arbitrary -- but we are so used to this sound, and it's so needed in most situations, that music without processing sounds almost horrible. For real fun, watch an audio level meter while playing Phil Spector "Wall of Sound" records from the mid-60s. When one of these came up on an oldies station, we'd joke that it was a good substitute for the steady 1000 Hz test tone used to set levels along the audio chain. But Spector wasn't going for "fidelity"; he was going for a hot sound on an AM car radio -- and he got it.
The entire chain is impure, of necessity and desire. Recorded music -- CD, vinyl, whatever -- is processed by the recording engineer to even out and capture the performance. Then it is manipulated by the recording medium -- tape recorders, for instance, apply an equalization "curve" to deal with the limitations of magnetic tape. Then it is AGAIN manipulated by the pressing plant.
For instance, to reduce noise, the RIAA curve is applied when mastering to vinyl; it tilts up the high frequencies, which the RIAA-compliant playback system tilts back down -- the main reason a turntable must be plugged into a phono input (the other being to get extra amplification of the weak signal). The problem is, even small variations in RIAA-curve compliance on either or both sides result in further loss of fidelity. Fortunately CDs skip this step. A similar process is used by Dolby and other noise reduction systems.
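For the curious, the standard RIAA playback curve is built from three published time constants (3180, 318, and 75 microseconds), and a few lines of Python show the tilt -- about +19 dB at 20 Hz and about -19.6 dB at 20 kHz, relative to 1 kHz. This is just a sketch of the textbook curve, not any particular phono stage:

    import math

    # RIAA playback (de-emphasis) curve from the standard time constants.
    T1, T2, T3 = 3180e-6, 318e-6, 75e-6  # seconds

    def riaa_playback_gain_db(f_hz: float) -> float:
        w = 2 * math.pi * f_hz
        # one zero (1/T2) between two poles (1/T1, 1/T3): bass comes back up,
        # treble goes back down -- the mirror image of the cutting-lathe EQ
        num = math.sqrt(1 + (w * T2) ** 2)
        den = math.sqrt(1 + (w * T1) ** 2) * math.sqrt(1 + (w * T3) ** 2)
        return 20 * math.log10(num / den)

    ref = riaa_playback_gain_db(1000)  # convention: 0 dB at 1 kHz
    for f in (20, 100, 1000, 10000, 20000):
        print(f"{f:>6} Hz: {riaa_playback_gain_db(f) - ref:+5.1f} dB")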
Radio and TV stations apply further processing, both to level out different audio sources and (especially in the case of radio) to make the signal "louder", to ride over reception noise and "jump out" of the radio dial. Stations typically feed the studio output "audio chain" into a real-time multi-band compressor, which splits the audio into several frequency bands (5 or more), levels the volume within each band, then puts it all back together to feed the transmitter. Bass-shy recordings get more bass and sound more balanced mixed in with bass-heavy recordings, which get their bass reduced -- dynamically, as the music plays. This is the biggest reason why home-made mix tapes/CDs/iPods/MC-playback don't sound like a radio station. (In my radio station days we'd spend hours tweaking the audio chain to get the perfect "punch", so my wish-list for MC's DSP is to have a radio-style compressor plug-in.)
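Since MC's DSP got mentioned: here is the crudest possible sketch of the split/level/recombine idea, just to make the structure concrete. The band edges, target level, and gain cap are made-up numbers, the filters are plain Butterworth rather than a proper crossover, and the gain is computed once over the whole clip instead of continuously with attack/release times the way a real broadcast processor works -- so treat it as a diagram in code, not a plug-in:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def multiband_level(x, fs, edges=(100, 400, 1600, 6400),
                        target_rms=0.1, max_gain=8.0):
        """Split into bands, nudge each band toward a target level, sum back."""
        freqs = (0,) + tuple(edges) + (fs / 2,)
        out = np.zeros_like(x)
        for lo, hi in zip(freqs[:-1], freqs[1:]):
            if lo == 0:
                sos = butter(4, hi, btype="low", fs=fs, output="sos")
            elif hi >= fs / 2:
                sos = butter(4, lo, btype="high", fs=fs, output="sos")
            else:
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfilt(sos, x)
            rms = np.sqrt(np.mean(band ** 2)) + 1e-12
            gain = min(target_rms / rms, max_gain)  # lift quiet bands, tame hot ones
            out += gain * band
        return out

    # A bass-shy test signal: weak 60 Hz component, strong 3 kHz component.
    fs = 44100
    t = np.arange(fs) / fs
    x = 0.05 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
    y = multiband_level(x, fs)  # low band comes up, midrange comes down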
I'm simplifying, of course, but the point is that "true" fidelity is lost the instant the sound enters the recording microphone. Everything after that is compromises, many of them necessary.
All of this said... Today, few people seem to encounter, know, or even care about anything close to High Fidelity. Witness ridiculous bass systems in cars, and the bizarre claims of "quality" applied to 3- and 4-inch speakers -- impossible due to the physics of audio. Then there's 128 kbps MP3, which would hurt even Phil Spector's ears.