My intention is to educate and assist users.
You posted about a database performance problem when using a 500,000 item Library. Later you revealed that you are running MC with this Library on a PC with just 4GB of memory, and pointed out that 2GB of it was free. I pointed out that it only had 2GB free because Windows was swapping much of itself out to the page file, and that with that memory restriction you are not going to get amazing performance with your Library. I recommended that you upgrade the memory to at least 16GB.
I am commenting on your posts because they are in substance mostly incorrect, and I don't want other users to think what you are saying would improve performance. If I also enlighten you, that would be a bonus.
You have ignored the advice to upgrade memory, and continue to suggest ways that MC could work better, while not having a good understanding of what MC and Windows are doing. You continue even after Matt, a senior developer at JRiver, has clarified how MC accesses Thumbnails, and has stated that this area of MC is "pretty well optimized". I assume, therefore, that you think you know better than a senior developer of MC regarding how MC could be made to work faster.
I have never said that an orderly readout should need a lot of memory. Please don't put words into my mouth.
I have said Windows 10 needs more than 4GB when running an application such as MC with a Library as large as yours. While Microsoft claims Windows 10 can run in 1GB of memory, the IT industry generally recommends a minimum of 8GB if you want to actually use your PC for anything other than simple browsing, email, and so on.
Your simple example sounds like it makes sense, and it may to some degree, particularly with look-ahead caching of thumbnails, when the thumbnail database is stored on a mechanical hard drive. On an SSD, where random reads do not depend on the movement of a read head, there is very little benefit. The benefit of doing sequential reads (if MC isn't already doing them) rather than reads in file display sequence would be far, far less than the benefit of having more memory available. Also, from my observations, MC doesn't load thumbnails in file display sequence. It loads thumbnails for whatever is visible, so a page of thumbnails, but it doesn't load from the top to the bottom of that page. The order seems somewhat random, more often from the bottom up, and is probably based more on image size, and hence load time.
Sequential read tests such as the one Buldarged shared above are contiguous sequential reads, and are therefore much faster, largely because they take advantage of look-ahead and buffering of files in fast RAM on the SSD. Accessing thumbnails from the database is not a contiguous sequential read process, even if it is a sequential read process, unless the sequence of files visible in a View exactly matches the sequence of thumbnails in the thumbnail database. That is a highly unlikely occurrence, and would only apply to one View. Any other View has a different sequence, so even sequential reads would be non-contiguous.
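To make the contiguous versus non-contiguous distinction concrete, here is a rough sketch in Python. The file size, chunk size, and offset pattern are invented for the example, and nothing here is measured from MC's actual thumbnail database; it just reads the same data once front to back and once at shuffled offsets, which is the difference between a drive benchmark and what a View actually asks for.

```python
# Illustrative sketch only: contrast one contiguous front-to-back read
# with reads of the same data at scattered, non-contiguous offsets.
# All sizes here are made-up example values.
import os
import random
import tempfile
import time

CHUNK = 64 * 1024          # assumed thumbnail-sized read
CHUNKS = 256               # 16 MB test file

def make_test_file() -> str:
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(CHUNK * CHUNKS))
    return path

def contiguous_read(path: str) -> int:
    """Read the whole file front to back, as a drive benchmark would."""
    total = 0
    with open(path, "rb") as f:
        while data := f.read(CHUNK):
            total += len(data)
    return total

def scattered_read(path: str, offsets: list[int]) -> int:
    """Read the same chunks, but in a shuffled, non-contiguous order."""
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    return total

if __name__ == "__main__":
    path = make_test_file()
    offsets = [i * CHUNK for i in range(CHUNKS)]
    random.shuffle(offsets)    # a View order that doesn't match file order
    t0 = time.perf_counter(); contiguous_read(path); t1 = time.perf_counter()
    scattered_read(path, offsets); t2 = time.perf_counter()
    print(f"contiguous: {t1 - t0:.4f}s  scattered: {t2 - t1:.4f}s")
    os.remove(path)
```

On an SSD the gap between the two patterns is far smaller than on a mechanical drive, which is exactly the point: the read order matters much less than it used to, and far less than available memory does.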
Your example is not a contiguous sequential read process.
Consider if your example View contained 4,000 files rather than 4, and the sequence was something like 9256, 7458, 3241, 2758, 9125, 7358, 3822, 2104, etc. Reading those files in that order would be no better than completely random reads. There is no physical drive head to move, and while read-ahead caching triggered by reading image 3241 might result in image 3822 already being cached in memory when it was required, given enough memory, there would be no guarantee that the image would still be in memory by the time it was required. If memory was limited, it may already have been paged to the drive to free up memory, and the benefit of the read-ahead caching would be lost. If that sequence was repeated in a similar manner, and say 20 thumbnails were displayed in a View at any one time, then any benefit of sequential reading of thumbnails would be lost.
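To illustrate why scattered key orders defeat read-ahead, here is a small simulation. The key numbers, cache capacity, and read-ahead depth are all invented for the example, and this is not how MC's cache actually works; it is just an LRU cache that prefetches a few sequential neighbours on every read, and counts how often a requested key was already cached.

```python
# Illustrative simulation only: an LRU cache with simple sequential
# read-ahead, fed a scattered View order versus a contiguous one.
# Capacity and read-ahead depth are made-up example values.
from collections import OrderedDict

def count_cache_hits(requests: list[int], capacity: int, readahead: int = 4) -> int:
    """Return how many requests were already in the cache when asked for."""
    cache: OrderedDict[int, None] = OrderedDict()
    hits = 0
    for key in requests:
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        # Fetch the key plus the next few sequential keys (read-ahead).
        for k in range(key, key + 1 + readahead):
            cache[k] = None
            cache.move_to_end(k)
            while len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits

scattered = [9256, 7458, 3241, 2758, 9125, 7358, 3822, 2104]
contiguous = list(range(100, 108))
print(count_cache_hits(scattered, capacity=16))    # 0: nothing prefetched is ever asked for
print(count_cache_hits(contiguous, capacity=16))   # 7: every key after the first was prefetched
```

With the scattered order, every prefetched neighbour is wasted work, and shrinking the capacity (the analogue of Windows paging the cache out under memory pressure) can only make things worse.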
In a Library with 500,000 items, Views would contain far more than 4,000 files, and the gaps between images in the Thumbnails Database would be much larger, making the above situation far worse.
However, I have noticed that there is temporarily an extremely large memory requirement. This can overload the disk I/O system.
This is the crux of the matter in your situation. While reading a large number of thumbnails will obviously require more memory, it will only require the memory to store the required images, plus a few extra thumbnails cached by the read-ahead caching, and a bit of overhead. Regardless of whether MC reads the thumbnails sequentially, it reads each individual image based on the FileKey MC uses and the indexing in the Thumbnail Database, and hence the offset within the file to the specific location where each thumbnail is stored. In order to display those thumbnails in MC, a certain minimum amount of memory is required. Your disk I/O system is being overloaded because Windows needs to page a lot of memory out to disk, and then pull it back again, constantly, whenever you do anything in MC.
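MC's actual Thumbnail Database format isn't public, so purely as an illustration of that kind of lookup, here is a sketch of an index that maps a key to an (offset, size) pair so a single thumbnail can be read with one seek, without scanning the rest of the file. All names and data are invented for the example.

```python
# Hypothetical sketch only: not MC's real format. It just shows the
# general idea of an offset index: key -> (offset, size), then one
# seek to read exactly one thumbnail out of a packed file.
import io

def build_store(thumbs: dict[int, bytes]) -> tuple[bytes, dict[int, tuple[int, int]]]:
    """Pack thumbnails into one blob and return it with an offset index."""
    index: dict[int, tuple[int, int]] = {}
    buf = io.BytesIO()
    for key, data in thumbs.items():
        index[key] = (buf.tell(), len(data))
        buf.write(data)
    return buf.getvalue(), index

def read_thumb(blob: bytes, index: dict[int, tuple[int, int]], key: int) -> bytes:
    """Seek directly to one thumbnail using the index; no scan needed."""
    offset, size = index[key]
    f = io.BytesIO(blob)
    f.seek(offset)
    return f.read(size)

thumbs = {3241: b"jpeg-bytes-a", 3822: b"jpeg-bytes-b", 9256: b"jpeg-bytes-c"}
blob, index = build_store(thumbs)
print(read_thumb(blob, index, 3822))   # b'jpeg-bytes-b'
```

The point is that this kind of indexed lookup is cheap regardless of the order the keys arrive in; the expensive part in your case is the constant paging, not the seeks.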
Essentially, factors other than the read sequence of images from the Thumbnail database are more important in your situation: factors such as available memory; Windows paging memory to the drive and reading the page file back into memory; the size of the Library; the quantity of thumbnails visible at any time in a View; read-ahead caching of images by MC; and the size of images (and hence memory usage).
I have no doubt that Thumbnailing in MC could be improved, particularly on modern SSDs that allow multiple simultaneous file reads. The most obvious improvement would be to make MC even more multithreaded. However, that would require more threads to be running, and hence more memory to be available.
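Purely as a sketch of what "multiple simultaneous file reads" looks like, here is the idea with a small thread pool. The pool size, file layout, and sizes are invented for the example, and this says nothing about how MC is actually threaded internally.

```python
# Illustrative sketch only: issue several thumbnail-sized reads at once
# via a thread pool, the kind of parallelism modern SSDs reward.
# File layout and sizes are invented example values.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096

def read_at(path: str, offset: int) -> bytes:
    """One independent read; each worker opens its own file handle."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)

def read_many(path: str, offsets: list[int], workers: int = 8) -> list[bytes]:
    """Fan the reads out across a small thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda off: read_at(path, off), offsets))

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(CHUNK * 32))
    chunks = read_many(path, [i * CHUNK for i in range(32)])
    print(len(chunks), "chunks read")   # 32 chunks read
    os.remove(path)
```

Note that each in-flight read still needs its own buffer, which is part of why more parallelism means more memory, and why the memory upgrade comes first.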