Jim and MrHaugen are correct in this. Applications like MC would see very limited performance benefits from going
64-bit, and could in many circumstances even run slower because of it.
The primary benefit an application sees from switching to a 64-bit executable is eliminating the 32-bit memory limit (2GB per process by default, up to 4GB for large-address-aware builds). This is the main reason many high-end applications like Photoshop and AVID, which regularly work with very large files, have had to go 64-bit. It is also why IE was ported to x64, though that had more to do with its poorly coded downloading engine than with any real performance benefit (the x86 version of IE could corrupt downloads over 4GB).

There are also benefits for applications that regularly need to memory map entire files over 4GB in size, which is difficult to do in a 32-bit environment. For simple playback, though, it is typically neither necessary nor desirable to map an entire 4GB file into memory; it is much better to do it in smaller, more digestible chunks (at least until mainstream desktop systems regularly have multiples of 4GB of RAM). This is the one place where MC could see some benefit from switching to a 64-bit file handling system, but even then it would only matter if you were using lots of 4GB+ files at the same time (not just tracking them in the database, but actually USING them simultaneously with the full files loaded in memory). I don't think we're to the point where the average user has 8-16GB of RAM available on their system, though...
Native 64-bit applications also gain a few other benefits related to 64-bit registers, but these only matter in specific circumstances. Primary among these is the ability to natively handle "double-long" (int64) integers. The standard "word" size in a 32-bit x86 application is int32, which can hold signed integers up to +2,147,483,647 (or unsigned up to 4,294,967,295). The problem is that math involving integers larger than those limits suffers a performance penalty, because the "double-long" data type doesn't fit into the CPU's 32-bit registers (extra instructions are required to split the math into 2 or more 32-bit calculations). Applications that do a lot of math using very large integers, mainly encryption and compression algorithms, will often see a dramatic (3-5x) performance improvement due to this simple fact. That's why you see applications like video encoders and TrueCrypt quickly adopting 64-bit native executables.
Lastly, there is a very small performance impact from running the x86-32 compatibility layer in a 64-bit environment (this is called Windows 32-bit on Windows 64-bit, or WOW64). With older 64-bit technologies, this kind of layer could have a dramatic impact, the old Itanium CPUs being the classic example. Those CPUs were designed from the ground up to handle 64-bit processes natively, but they weren't x86-32 compatible, so all x86 code had to be run in software emulation (the original Itanium had hardware x86 support, but it performed so poorly that Intel eventually replaced it with software emulation). This was the genius of the
AMD x86-64 implementation, which eliminated most of these concerns by remaining compatible with the x86-32 instruction set at the hardware level. In practice, the impact of using the WOW64 layer in Windows Vista and later is incredibly small (because an AMD x86-64 CPU, and Intel's clone of it, actually IS a 32-bit x86 CPU, with x64 extensions that can be switched on and off as needed). So, on a modern x86-64 CPU, WOW64 is not so much an emulation layer as an interface to the hardware features of the CPU that lets Windows switch between 32-bit and 64-bit modes. Also, keep in mind that large portions of Windows x64 are still x86 code (even on Windows 7 x64), so the WOW64 layer will be running no matter what you do (there is no provision to "turn it off"). And, more importantly...
It is important to remember that there is a major disadvantage to compiling your application as x64: relative to a 32-bit build, the same data occupies more space in memory (due to 64-bit pointers, wider data types, and alignment padding). This can dramatically increase the memory requirements of a given process, and it hurts processor cache utilization, since fewer objects fit per cache line. Unless your application needs to do a lot of int64 (or 128-bit floating point) math, the tiny performance benefit you gain from not running in the WOW64 layer is overshadowed by the memory bloat (all those memory pointers, and anything pointer-sized, are now twice as big, even when you don't need them to be). In fact, it can be argued that even for applications that do a lot of 64-bit math, a much better approach would be to do the processing on the GPU, which would provide an even greater performance boost than simply moving to x86-64.
For most applications, a much better approach is modularization: "spin off" the few routines that do benefit from 64-bit processing into separate 64-bit components (for MC, this would probably be the video re-encoding engine used to transcode video files for handheld use), while keeping the primary application in 32-bit land to avoid the memory bloat. Note that a 32-bit process can't load a 64-bit DLL directly, so in practice this means a separate 64-bit helper process communicating over IPC (or an out-of-process COM server). This seems to be the approach JRiver is taking, and for the foreseeable future it is the correct path. Once we get to consumer-grade machines that regularly have 16 and 32GB of RAM, with cache sizes measured in hundreds of MB, the benefit analysis may look different, but we're a long way off from that.
EDIT: Clarified a couple of sentences and added some links for documentation.