Here's a question for you clever chaps....
Is there any advantage to frame rate doubling for deinterlacing?
Here's the background to my question:
Living in PAL land, I have a number of '50i' clips that clearly need deinterlacing. Now, weaving the two interlaced fields together gives a progressive frame every 1/25 s, i.e. '25p'. However, the standard frame rate for TVs and monitors is 50p, which needs a doubling of frame rate. My question is: do we just get an identical pair of frames? Or do modern video cards do some fancy frame interpolation so that each frame is unique? And by extrapolation, is there therefore any advantage to a 100p or 150p monitor (if they were available)?
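To make the question concrete, here's a minimal sketch (Python/NumPy, with dummy field data) of the two options: weaving both fields into a single 25p frame, versus a naive line-doubling "bob" that turns each field into its own 50p frame. This is just an illustration of the idea, not what any particular video card or TV actually does:

```python
import numpy as np

HEIGHT, WIDTH = 576, 720  # PAL frame size

# Two fields, captured 1/50 s apart (random data just for illustration)
top_field = np.random.randint(0, 256, (HEIGHT // 2, WIDTH), dtype=np.uint8)
bottom_field = np.random.randint(0, 256, (HEIGHT // 2, WIDTH), dtype=np.uint8)

def weave(top, bottom):
    """Interleave both fields into one full frame -> 25p (one frame per field pair)."""
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2] = top      # even lines come from the top field
    frame[1::2] = bottom   # odd lines come from the bottom field
    return frame

def bob(field):
    """Line-double a single field into a full frame -> 50p (each field is its own frame)."""
    return np.repeat(field, 2, axis=0)

frame_25p = weave(top_field, bottom_field)        # one frame per 1/25 s
frames_50p = [bob(top_field), bob(bottom_field)]  # two frames per 1/25 s
```

In this naive bob, the two 50p frames are not an identical couplet, because each comes from a different field sampled 1/50 s apart; whether real hardware does this (or something cleverer, like motion-adaptive interpolation) is exactly what I'm asking.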
I note that 4K TVs like the new Sony offer 25p and 30p rather than 50p and 60p, presumably because of the HDMI bandwidth limitation, and wondered if this is a deinterlacing disadvantage versus 50p and 60p.
If my TV had a 1080p25 mode (and there was no loss of deinterlacing quality), it would allow me to run more demanding madVR upscaling algorithms too.
SBR