You are wrong.
The normalization process is a "stupid" volume level adjustment that just makes each track have the same maximum peak level. It does not take the audio content into account in any way, so the perceived average volume levels will still vary in an uncontrolled way. The adjustment is done with DSP before the files are saved, and it actually changes the audio content. Its possible effect on audio quality (other than the change in volume level) is most likely inaudible because it uses high-quality DSP, but it nevertheless alters the audio content and cannot be reverted just by changing a flag in the file header. It does not save any kind of header data or tags.
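To make that concrete, here is a rough Python sketch of what a peak normalizer does, assuming 16-bit PCM samples already decoded into a NumPy array (real rippers do this with higher-quality DSP, and usually with dithering, before the file is written):

```
import numpy as np

def peak_normalize(samples: np.ndarray, target: float = 0.98) -> np.ndarray:
    """Scale int16 samples so the loudest peak lands at `target` of full scale."""
    peak = np.max(np.abs(samples.astype(np.float64)))
    if peak == 0:
        return samples                        # silent file: nothing to do
    gain = (target * 32767.0) / peak          # one constant gain for the whole track
    scaled = samples.astype(np.float64) * gain
    # Rounding back to int16 is the irreversible step: once the file is
    # saved, the original sample values are gone.
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)
```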
Replay Gain, a.k.a. Volume Leveling in MC, is a different beast. It does not touch the audio content at all. It analyzes the content and saves the results in the library database and in the file tags (when possible). The analyzer uses a sophisticated algorithm that tries to imitate the human auditory system. During playback the saved values can then be used to correct the volume.
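In code terms, the difference is that the gain lives in metadata and is applied only to the playback buffer. A minimal sketch, assuming the analysis step already stored a gain value in decibels in the file's tags (how MC actually reads its stored values is internal to MC):

```
import numpy as np

def apply_replay_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Apply a stored gain to decoded audio on the fly.

    The file on disk is never modified; only the playback buffer is scaled.
    """
    gain = 10.0 ** (gain_db / 20.0)   # dB -> linear amplitude factor
    return samples.astype(np.float64) * gain
```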
Alex is right (of course), though I think he overstates the possibility that well-implemented normalization will affect the audio "quality" of any track. Computer audio files have a range of possible loudness from 0% to 100% (I'm simplifying here, but it works). However, a particular audio file that comes directly off of a CD may "peak" (the loudest part of the file) at 60%, while another peaks at 99%. What normalizing does is really quite simple... Basically, if you set normalizing to 98% (a common setting), it scans through each input file, works out the one gain factor that puts the peak at exactly 98%, and then multiplies every sample in the output file by that same factor, so the rest of the file scales proportionally.
So, for our file that peaked at 60%: say the loudest split second was 60% and the quietest split second was 19%. The gain factor is 98/60 ≈ 1.63, so the file now spans roughly 98% down to 31%. For our file that peaked at 99%, say it spanned 99% down to 32%: the gain is 98/99 ≈ 0.99, so it is nudged down to roughly 98% and 31.7%. (The worked numbers are in the snippet below.)
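Here are those numbers worked through (the percentages are my own simplification from above, not anything the ripper exposes):

```
# gain = target_peak / current_peak, applied to every sample
for peak, quiet in [(0.60, 0.19), (0.99, 0.32)]:
    gain = 0.98 / peak
    print(f"gain = {gain:.3f}: peak {peak:.0%} -> {peak * gain:.1%}, "
          f"quiet {quiet:.0%} -> {quiet * gain:.1%}")

# gain = 1.633: peak 60% -> 98.0%, quiet 19% -> 31.0%
# gain = 0.990: peak 99% -> 98.0%, quiet 32% -> 31.7%
```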
The one instance where this can impact perceived quality at all is at the very bottom of the scale. Basically, samples are stored as whole-number steps (16-bit CD audio has 65,536 of them), so after the gain is applied, every sample has to be rounded back to the nearest step. Each pass adds up to half a step of rounding error per sample; that is nothing next to a loud passage, but it is proportionally largest for near-silent detail sitting only a few steps above zero. So say you have a file that is mastered to use the full scale (the loudest portion at 100% and the quietest detail barely above silence), and then you normalize it to 98% with a sloppy routine that doesn't dither: that faintest material is where the damage, such as it is, would land. This would only be a real problem if there were lots of tracks that actually used the CD's full dynamic range.
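Back-of-envelope numbers for how small that worst case is (same 16-bit assumption as above):

```
step = 1.0 / 32767        # one 16-bit quantization step, as a fraction of full scale
max_round_err = step / 2  # worst-case error from a single re-quantization
print(f"one step     = {step:.6%} of full scale")
print(f"max rounding = {max_round_err:.6%} of full scale")

# one step     = 0.003052% of full scale
# max rounding = 0.001526% of full scale
# Even detail only 10 steps above silence picks up at most ~5% relative
# error from one normalization pass; louder material, far less.
```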
However, this never, ever, ever happens, for two reasons. First, there is, unfortunately, a "loudness war" in most commercial music. Studies have shown that on the radio, tracks that average "louder" tend to be more popular and sell better, while the converse is true for quieter tracks. So commercial recordings have been slowly inching upwards on the percentage scale, to the point where on most CDs the dynamic range is extremely compressed: for your average song, the "peak" is at, say, 97% and the "low" is at 84%, and the rest of the range goes completely unused. The problem comes when you have one CD where the peak is 97% and the next where the peak is 84%. Both may use only 20-30% of the possible dynamic range, but one will seem much louder than the other. Normalizing fixes this, and because the quiet end of the scale carries no signal on these recordings, there is nothing down there to damage. The second reason is that a well-implemented normalization routine has "clipping protection": before it writes anything, it checks whether the computed gain would push any sample outside the valid range, and if it would, it reduces the gain or simply skips the file.
Now, can any human being actually hear half a rounding step at the very bottom of the scale, in the very rare case that some old classical recording actually uses the CD's full dynamic range (and you were using a dumb normalization routine with no dithering or clipping protection)? No. Absolutely not. If some audiophile tells you they can, make them prove it with a true, blind listening test. It is psychological. However, fair is fair, and lossless should really be lossless, so I would recommend against normalizing when ripping to FLAC. If you are going to go through all the hassle of ripping to a lossless format, you might as well really have a completely bit-perfect file and not risk altering it, even given the incredibly rare one-in-a-million chance that it could ever matter.
As Alex indicated, MC's Replay Gain (Volume Leveling) works differently. It applies the changes on the fly and does not impact the audio waveform written to the file.
Okay, now that we've discussed what this is in detail... What should you do?
I think you've come up with a very good compromise:
1. Rip to FLAC with Normalization turned off. If you rip in secure mode (and you might as well if you're ripping to FLAC), these files will be bit-perfect, meaning that when you play them back they will be identical in every way to the audio on the original CD, at least until the signal reaches the DAC in your computer.
2. Use Replay Gain if desired when playing these FLAC files back on your computer. I recommend album-based rather than track-based leveling, which preserves the relative levels between the tracks of an album (important for some well-mastered works, such as albums by Nine Inch Nails and The Beatles, where the quiet tracks are meant to be quiet). There is a sketch of the difference after this list.
3. If your handheld player does not support an on-the-fly normalization/ReplayGain-style feature (the iPod and the Sansa do; I don't know about the Cowon players), then enable Normalization when you encode to OGG for handheld use. That way you will protect your eardrums from being blasted out when it switches tracks on you while you're running at the gym or whatever.
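On the album-vs-track point in step 2, here is a toy sketch with hypothetical gain values; note that a real album gain comes from a pooled loudness analysis of the whole album, not from the per-track values shown here:

```
# Hypothetical per-track gains (dB) as a ReplayGain-style analyzer might store
track_gain_db = {"loud_track": -9.4, "quiet_track": +2.0}
album_gain_db = -6.5  # hypothetical single value stored for the whole album

def playback_gain(track: str, album_mode: bool) -> float:
    """Pick which stored gain to apply at playback time."""
    return album_gain_db if album_mode else track_gain_db[track]

# Track mode pulls both tracks to the same perceived loudness, flattening
# their contrast. Album mode shifts both by -6.5 dB and so preserves the
# 11.4 dB difference the mastering engineer intended.
```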