Why?
Without getting into the tedious details of NTFS's design compared to the filesystems used on Unix systems: as a disk fills up, it becomes harder for the system to find contiguous free space to store a file, so it has to break the file up and store the pieces in different places on the disk. When a disk first starts to fragment, this isn't an immediate problem, but believe me, once it starts it goes fast, especially when you are running out of free disk space.
The less free space there is, the harder this becomes for the system. Most systems won't suffer badly from a handful of fragments, but I have seen systems with hundreds of fragments in a single file. Those systems would literally take minutes to launch Internet Explorer or MS Word. Clicking the print button would hang the machine for minutes while the hard disk ground away.
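To see why a nearly full disk fragments new files so badly, here is a toy simulation (not NTFS's actual allocator, just an illustrative first-fit scheme): deleting files punches holes in the free space, and a new file that doesn't fit in any single hole has to be split across several of them.

```python
# Toy simulation of file fragmentation on a nearly full disk.
# Blocks are allocated first-fit; deleted files leave holes, so a new
# file larger than any single hole gets split into multiple fragments.
# All names and numbers are illustrative, not NTFS internals.

def free_runs(disk):
    """Return contiguous runs of free blocks as (start, length) pairs."""
    runs, start = [], None
    for i, used in enumerate(disk):
        if used is None and start is None:
            start = i
        elif used is not None and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(disk) - start))
    return runs

def allocate(disk, size, file_id):
    """First-fit allocate `size` blocks; return the number of fragments."""
    fragments = 0
    for start, length in free_runs(disk):
        take = min(length, size)
        for i in range(start, start + take):
            disk[i] = file_id
        size -= take
        fragments += 1
        if size == 0:
            return fragments
    raise MemoryError("disk full")

disk = [None] * 100              # 100-block disk, all free
allocate(disk, 60, "A")          # one big file: fits in 1 fragment
# Delete three 10-block slices of A to punch holes in the free space
for i in range(0, 60, 20):
    for j in range(i, i + 10):
        disk[j] = None
print(allocate(disk, 45, "B"))   # B must span several holes -> prints 4
```

The second file is only 45 blocks on a disk with 70 blocks free, yet it ends up in four fragments, because no single free run is big enough. On a real volume the same thing happens with every write once free space is both scarce and scattered.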
I am currently dealing with a couple of Windows 2003 file clusters with 500 GB LUNs ('disks' on a SAN) that are running out of space and are heavily fragmented. Believe me, it's not funny.
http://www.pcguide.com/ref/hdd/file/ntfs/relFrag-c.html is a simple article on NTFS fragmentation that deals with the myth that NTFS doesn't need defragmenting.
A little more technical is http://www.ntfs.com/ntfs_optimization.htm.