It's much more obvious now when a video is lower quality. Especially in dark scenes, I often see an effect that looks like smudging or smearing. Imagine a guy running through a dark alley at night toward a lit street. You see the guy running and moving, with the streetlights in the background, but the parts that disappear into the darkness remain stationary.
This is a temporal compression artifact.
Most modern video codecs (the delivery codecs, anyway) use temporal compression. If you imagine each frame of a video as a separate photo, then video compression generally works like this:
Uncompressed: Each frame is essentially like an uncompressed TIFF (in a funky color space). For a 1080p stream at 24p, this works out to roughly 142 MB/s (24 frames × 1920 × 1080 × 3 channels = 149,299,200 bytes per second, not including sound or metadata).
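You can verify that back-of-the-envelope figure yourself:

```python
# Checking the uncompressed 1080p/24p figure: one byte per channel,
# three channels per pixel, no chroma subsampling.
frames_per_second = 24
width, height = 1920, 1080
bytes_per_pixel = 3

bytes_per_second = frames_per_second * width * height * bytes_per_pixel
print(bytes_per_second)                    # 149299200
print(round(bytes_per_second / 2**20, 1))  # ~142.4 MB/s
```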
DV and AIC: Each frame is essentially like a JPEG of the same TIFF from the uncompressed stream. Many non-temporal codecs (like DV) also use chroma subsampling to achieve higher compression ratios: the color data is stored at a lower resolution than the black-and-white luma channel.
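To see how much that color trick saves, here's the arithmetic for 4:2:0 subsampling (a common scheme, used only as an illustration here), where each of the two color planes is stored at half resolution in both dimensions:

```python
# Luma (Y) at full resolution; color (Cb, Cr) at half resolution
# in each dimension, so each chroma plane is 1/4 the size of the luma plane.
width, height = 1920, 1080

full_444 = width * height * 3                              # full-res color
sub_420 = width * height + 2 * (width // 2) * (height // 2)

print(sub_420 / full_444)  # 0.5 -- half the data before any other compression
```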
MPEG-2 (H.262) and MPEG-4 (ASP and H.264): Frames are still compressed as described above, but they are no longer encoded independently. Instead, the codec stores keyframes at regular intervals and, between them, only the differences from one frame to the next. The idea is that in most footage, only small portions of the frame change from frame to frame: the sky and background stay relatively static as the guy in the foreground walks across the screen. More advanced codecs (like H.264) do a better job of predicting these changes than older ones like ASP (XviD) and MPEG-2 (DVD). This is called "temporal" compression, and it is common to all modern delivery codecs.
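The core idea can be sketched in a few lines. This is a toy frame-difference scheme, not any real codec's algorithm (real codecs predict motion and quantize transform coefficients), but it shows why static backgrounds are nearly free:

```python
# Toy temporal compression: store a keyframe, then only the pixels
# that changed in the next frame.

def frame_delta(prev, curr):
    """Return {pixel_index: new_value} for pixels that differ from prev."""
    return {i: c for i, (p, c) in enumerate(zip(prev, curr)) if p != c}

# Toy 1x8 "frames": a bright subject (255) moving across a dark,
# static background (10).
keyframe = [10, 255, 10, 10, 10, 10, 10, 10]
frame2   = [10, 10, 255, 10, 10, 10, 10, 10]

delta = frame_delta(keyframe, frame2)
print(delta)  # {1: 10, 2: 255} -- only the two pixels the subject touched
```

Everything the subject didn't touch costs nothing to store.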
Unfortunately, to achieve high compression ratios, temporal codecs have to use some fuzzy math about color and frame changes. Much like MP3 and JPEG compression, this is designed to be invisible to the human perceptual system. However, there are limits. As you increase the amount of compression, you become more likely to see artifacts, particularly in the shadows (which the codecs are tuned to "destroy" first). If the video is compressed at a fixed bitrate (rather than with a VBR scheme), you will also see issues in frames where lots of action is happening, or where there is a "busy" background with lots of changes (a guy walking across a busy street in NYC, for example). Slow fades are one of the hardest things to compress temporally, because every pixel of every frame differs from the one before it.
There are some things you can do to smooth out the video (deblocking and whatnot), but you are essentially just further manipulating the data that is there. The only way to "hide" these details in the shadows is to blur them, so the edges don't look so sharp to your eye. You can't reconstruct data that isn't there. You can try to make smart guesses about what might have been there (and some algorithms do), but it is still blurring and guessing.
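To make "hiding is just blurring" concrete, here's a one-dimensional box blur standing in for a deblocking filter (real deblocking filters are smarter and edge-aware, but the principle is the same): it softens the hard step between blocks without inventing any real detail.

```python
# A 1-D box blur as a stand-in for deblocking: average each sample
# with its neighbors (edges clamped to the row).

def box_blur(row, radius=1):
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) // (hi - lo))
    return out

blocky = [16, 16, 16, 16, 40, 40, 40, 40]  # a hard block edge in the shadows
print(box_blur(blocky))  # [16, 16, 16, 24, 32, 40, 40, 40]
```

The step is now a ramp, so it looks less like a compression artifact, but whatever detail was originally in those shadows is gone either way.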
The reason I mention all of this is to make one point: if you apply these kinds of corrections to your "bad" videos and the result looks better to your eye, great. But it is a blur effect. Don't apply it to videos that are already in good shape, because you will be reducing their quality. Hence:
"You can't make chicken salad out of chicken sh*t."