Thanks for your input. I am aware that I am stubborn about lossless formats, but I will try out the advice you all provided here.....and maybe read more about lossless. But I can't understand how it works.
How can you convert a file to another format and compress the data, and then restore it again...and where does the converting take place when listening.....I might be stupid, but how can you eat half the cake and then still have it?
The same way a ZIP file works. When you compress Word files and executable files and Excel spreadsheets into a ZIP file, they get smaller, but when you open the ZIP file, they expand to be EXACTLY the same as they were before they were compressed. If the files had changed at all, they wouldn't work! You can't have an EXE file changing every time you ZIP it.
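If you want to see that with your own eyes, here's a quick Python sketch (the data and the archive name are made up, purely for illustration) that round-trips some bytes through a ZIP archive and checks that nothing changed:

```python
import hashlib
import io
import zipfile

# Some "file" contents: could be an EXE, a spreadsheet, anything.
original = b"Any bytes at all: \x00\x01\x02 " * 1000

# Compress it into an in-memory ZIP archive.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("example.bin", original)

# Extract it back out.
with zipfile.ZipFile(buffer, "r") as archive:
    restored = archive.read("example.bin")

# The hashes match: not one bit changed on the round trip.
print(hashlib.sha256(restored).hexdigest() == hashlib.sha256(original).hexdigest())  # True
print(f"{len(buffer.getvalue())} bytes zipped vs {len(original)} bytes original")
```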
The reason it works is that there is a LOT of redundant data in computer files of all sorts. For example, imagine a simple text file, like this text that I'm writing here. The way "normal" uncompressed text works is that the computer stores an ASCII code (or Unicode, more commonly now) for each character in the file and keeps them in a "string". That "string of codes" is what is written to disk when you save a plain text file. So, in ASCII, for each "A" character it stores the number 65, for each "b" character it stores a 98, for each space it stores a 32, and for each new line it stores a 10. When you read this "string" of codes back into the system from the file on disk, the ASCII interpreter can perfectly reconstruct your text block, with all of the spacing and blank lines and letters in their proper case and place. That's a simple, dumb, uncompressed encoding.
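You can poke at this yourself in Python, where `ord()` gives you the code for a character and `chr()` turns a code back into a character:

```python
text = "Ab A"
codes = [ord(c) for c in text]
print(codes)  # [65, 98, 32, 65] ... 'A' is 65, 'b' is 98, a space is 32
restored = "".join(chr(n) for n in codes)
print(restored == text)  # True: the codes rebuild the text perfectly
```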
The problem is that for each and every single space in your original text, the computer stores another 32. So, if you type out 15 spaces in a row, it actually stores 15 "32"s in a row in the saved file. The same goes for all the letters and characters in your document. If you have 32 blank lines, it actually stores 64 separate character codes (32 carriage return codes and 32 line feed codes).
But there is a more efficient way. Imagine if, instead of simply writing out a string of character codes, you had the ability to "repeat" a character a set number of times. So, when the computer encounters that run of 15 identical spaces, instead of writing out "space", "space", "space" fifteen times, it simply says "space" x 15, and then carries on. When you blow it back up into real text, the file looks, works, and IS exactly identical, but your "saved file" on disk is smaller because you "wrote it down" using a smarter method.
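That trick has a name: run-length encoding. Here's a toy Python sketch of the idea (not any real file format, just the concept):

```python
def rle_compress(text):
    """Collapse runs of identical characters into (character, count) pairs."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1  # same character again: bump the count
        else:
            runs.append([ch, 1])  # new character: start a new run
    return runs

def rle_decompress(runs):
    """Blow the (character, count) pairs back up into the original text."""
    return "".join(ch * count for ch, count in runs)

original = "word" + " " * 15 + "word"
packed = rle_compress(original)
print(packed)  # ... [' ', 15] ... instead of fifteen separate spaces
print(rle_decompress(packed) == original)  # True: exactly identical
```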
You can even do bigger tricks. Say the computer has a "dictionary" of common words, including stuff like "and", "the", "center", "tools", and "peanuts". It assigns a special "code" to each of these words. Then, when it is saving the file and it encounters a word in the dictionary, it just substitutes the "word code" for the whole word instead of having to write out the individual letter codes one at a time to spell the word out. Most text re-uses common words a LOT, so even little words can save a lot of space. Imagine how many times you use simple connector words like "and" and "the" in every paragraph you write. If you can cut the storage size of these words in half, you can cut down the total size of the saved file enormously! And, again, when you "extract" the file by reading it back through the interpreter, it turns back into exactly the same file it was before you compressed it. In other words, the compression was "lossless" because it did not lose any data at all. Not through magic, but through a more efficient "encoding" for disk storage.
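Here's that dictionary trick as a toy Python sketch. (Real compressors like ZIP's DEFLATE build their dictionary from the data itself as they go, LZ77-style, but the principle is the same.)

```python
# A tiny fixed dictionary: common words get short one-byte codes.
# The words and codes here are made up for illustration.
DICTIONARY = {"and": "\x01", "the": "\x02", "center": "\x03",
              "tools": "\x04", "peanuts": "\x05"}
REVERSE = {code: word for word, code in DICTIONARY.items()}

def compress(text):
    # Substitute the short code for every word found in the dictionary.
    return " ".join(DICTIONARY.get(word, word) for word in text.split(" "))

def decompress(packed):
    # Look each code back up and restore the full word.
    return " ".join(REVERSE.get(token, token) for token in packed.split(" "))

original = "the cat and the dog ate the peanuts"
packed = compress(original)
print(len(packed), "characters packed vs", len(original), "original")
print(decompress(packed) == original)  # True: lossless, nothing lost
```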
Generally the tradeoff is this: the more efficiently you pack your file onto disk (the more "compressed" it is), the more work it takes for the computer to "unzip" it and put it back the way it was. It is very easy to read a plain string of ASCII codes. It is a little harder to look codes up in a dictionary and figure out what word to substitute as you are unzipping a file. The end result is the same, but it might take more time to read the file that uses the dictionary-based compression. In the early days of computers, this mattered. The processor was slow, and the disk was relatively fast. It made more sense to store the data on disk using the "fastest" method, to save processor time when you had to read that file back.
But now the situation has reversed. Processors are screaming fast. Generally MUCH faster than what the average person needs for what they do. However, while disks have gotten faster than they used to be, they haven't advanced at anywhere near the same pace! So now, it makes a lot more sense to "waste" the processor time needed to save storage space. You have tons of performance to spare under that hood!
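You can actually watch that tradeoff happen. Python's built-in zlib module (the same DEFLATE compression ZIP typically uses) lets you pick a level from "fast but bigger" to "slow but smaller"; the exact times and sizes will vary by machine, but the pattern holds:

```python
import time
import zlib

# Repetitive sample data; real documents show the same pattern.
data = b"the quick brown fox jumps over the lazy dog " * 50000

for level in (1, 6, 9):  # 1 = fastest, 9 = smallest
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(packed):>7} bytes in {elapsed:.4f}s")
    assert zlib.decompress(packed) == data  # every level is still lossless
```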
Lossless audio compression works exactly the same way. It analyzes the audio data written onto disk and figures out a more efficient way to store the exact same data on the hard drive. When you "unzip" the file (which happens automatically, on the fly, as the player reads the file during playback), the player software turns that compressed file on the disk back into the exact same original data you had to start with. And here's a dirty little secret... a WAV file is just an "encoding" too! It's the dumb, uncompressed kind: raw PCM sample values written out one after another, the audio equivalent of that plain string of ASCII codes, which is why it's so much bigger than more modern formats like APE and FLAC.
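FLAC and APE use predictors tuned specifically for audio, so I won't try to reproduce them here, but the "comes back bit-for-bit identical" property is easy to demonstrate with a general-purpose lossless compressor standing in for the audio codec. A sketch, with zlib as the stand-in and a fake sine-wave tone as the "music":

```python
import math
import struct
import zlib

# Fake 16-bit PCM: one second of a 440 Hz tone at 44.1 kHz.
samples = [int(32000 * math.sin(2 * math.pi * 440 * n / 44100))
           for n in range(44100)]
pcm = struct.pack(f"<{len(samples)}h", *samples)

packed = zlib.compress(pcm)      # stand-in for FLAC/APE (they compress audio far better)
restored = zlib.decompress(packed)

print(len(packed), "bytes vs", len(pcm), "bytes of raw PCM")
print(restored == pcm)  # True: every sample comes back bit-for-bit identical
```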
Now, for media files (videos, pictures, and music) there is another kind of compression, called "lossy" compression. Lossy compression schemes include things like JPEG, MP3, and MPEG-4. Lossy compression is NOT like what I described above. A lossy compressor analyzes the original source file and creates a smaller version that is NOT exactly the same when you "play it back" (when you unzip it, it doesn't come back out exactly the way it went in), but there is a mechanism in place to make sure it looks basically the same (or sounds the same, or whatever).

This works because human senses are not perfect, but computer storage is. So, for a photo, the computer may be able to reproduce about 16 million different colors (24-bit color). However, when you look at a photo, your eyes can't tell the differences between all of the tiny little variances in the different shades of "red", for example. Sure, you can tell the difference between pink and crimson, but telling the difference between two slightly different shades of crimson, when the spot may only be 1 pixel wide, the two shades are only a tiny fraction of a percent apart, and there are literally millions of these pixels all mixed together? Your eyes "fudge" it and it looks the same. Your brain fills in the missing information. So the computer can just see those two slightly different crimson pixels and say "well, those are close enough, just use the same color there".

So, what lossy compression does is "look at" the file like a human and decide what data is important and what data can be "fudged" without impacting the way you will actually "see" it. This works for files that are "perceived", like audio and images, but it obviously doesn't work for executable files or text! You can't run text through an interpreter to turn my long diatribe above into "lossy loses data, lossless doesn't". It wouldn't really have the same impact!
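Here's a crude Python sketch of that "close enough" idea: snap each 8-bit color channel to a coarser grid so that nearby shades collapse into one. (Real JPEG does much smarter math in frequency space, but the spirit is the same.)

```python
def quantize(channel, step=16):
    """Snap an 8-bit color channel to the nearest multiple of `step`."""
    return min(255, round(channel / step) * step)

# Two barely different shades of crimson, as (R, G, B) pixels.
crimson_a = (220, 20, 60)
crimson_b = (221, 21, 59)

fudged_a = tuple(quantize(c) for c in crimson_a)
fudged_b = tuple(quantize(c) for c in crimson_b)

print(fudged_a == fudged_b)  # True: "close enough, just use the same color there"
print(fudged_a == crimson_a)  # False: the original exact values are gone for good
```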
This is lossy compression. It saves WAY more space than lossless compression (that's why an MP3 is way smaller than a WAV or FLAC file), but it also throws data away. That's fine in some cases, but sometimes you want the pristine, untouched original source. When you do, you want lossless compression, not lossy. The JPEG your digital camera saves onto the flash card might look the same as the uncompressed image that comes off of the camera's sensor, but "underneath" it is NOT the same as the RAW file the camera sees.
So, that's the difference. Lossless compression schemes (like ZIP, FLAC, and APE) are PERFECT. What you put in is exactly, bit-for-bit identical to what you get back out. They just use a more efficient means of storage. Lossy compression schemes (like MP3, AAC, H.264, and JPEG) are NOT perfect. What you get out looks and works "close enough" to what you put in, but they are not identical.