I think you are still missing the point. You wrote:

and you linked a Lifehacker article which stated:

Which is exactly what I said in my previous post.
Theoretically, with far more RAM than you will ever use, the page file should never be touched - but it is.

The fact that Windows (and some software, for whatever reason) expects the page file to be there, and reacts poorly when it finds an unexpected condition, is not evidence that keeping the page file on disk hurts system performance in non-RAM-limited situations. Like I said, there IS some additional disk activity when the page file is on disk, but the writes are TINY and bookkeeping-related (and can be done in the background, where they don't impact latency). Meanwhile, running a RAM disk adds its own CPU and memory-subsystem load. Which is worse? I'd trust the guy who knows the kernel inside and out to know what he is talking about.
In fact, the most common problem you'd ever hit if you turn off your page file is running out of memory when you run something unusual that really does need a bunch of RAM. In those cases, a RAM disk would hurt, not help, because the system's total commit capacity is lower. Lifehacker doesn't tend to get into the nitty-gritty details. Have you read the entire Mark Russinovich series? Because I have, and I saw him speak while he was in the process of writing them.
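To make the commit-capacity point concrete, here's a toy back-of-the-envelope calculation. The numbers (8GB page file, 8GB RAM disk) are made up for illustration, not taken from anyone's actual setup:

```python
# Toy illustration: Windows commit limit is roughly physical RAM plus
# total page file size. All sizes in GB; numbers are illustrative only.
ram_gb = 64
pagefile_gb = 8

# Normal config: page file on disk adds to commit capacity.
with_pagefile = ram_gb + pagefile_gb          # 72 GB commit capacity

# Page file disabled: commit limit is just physical RAM.
no_pagefile = ram_gb                          # 64 GB commit capacity

# Page file on a RAM disk: you carve 8 GB out of physical RAM and get
# the same 8 GB back as page file - no net commit gain, and 8 GB less
# RAM is available for actual working sets.
ramdisk_pagefile = (ram_gb - pagefile_gb) + pagefile_gb   # still 64 GB

print(with_pagefile, no_pagefile, ramdisk_pagefile)
```

The arithmetic is the whole point: the RAM-disk page file can never raise the commit ceiling above what disabling the page file gives you, while it does shrink the RAM left over for everything else.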
I'm a video editor. The system I'm writing this on right now has 64GB of RAM, and actually uses it. We're generally a very concerned bunch about memory performance. And some of the imaging systems down the hall that I use to generate videos (from EM systems and other, more esoteric microscopy systems) have even more absurd memory demands.
But that's why I know a lot about the "suggested fixes" floating around out there, and why many (but not all) of them are snake oil. Your other example, from the extreme overclockers thread, is a perfect case in point...
I've read TONS of those
kinds of comments used as "evidence" of the value of a particular tweak...
Correlation does not equal causation, and anecdotal evidence isn't evidence. I have NO IDEA what that guy's system might have been doing to cause his issues, and his "solution" may or may not have had anything to do with the problem at hand.
Just to take that example... His issue could just as easily have been caused by malware that went dormant for some reason (maybe because it "saw" the system changes and was programmed to go quiet if it looked like a tech might be banging on it, to avoid detection). It could have been hardware problems on the disk that he "avoided" by moving the page file off of the failing drive. Likewise, there could have been SATA driver problems on the system (well documented with AMD chipsets) causing generalized disk-access problems (again, avoided by moving the page file off of disk, but you'll eventually hit those problems anyway). Or, like many users, he might have been doing multiple "troubleshooting" steps at once without a systematic approach, and now he "thinks" his problem was solved by the RAM disk, when really it was the GPU driver update he did, or a Windows update that happened, or the disk getting defragged in the background, or who knows what else.
There's way too much variability in those kinds of examples to glean anything useful from them, other than... Cool story, bro. Glad your problem is solved.
If it isn't documented and repeatable, then it doesn't exist (or might as well be placebo). It certainly doesn't support a general recommendation like "if you have unused RAM, this is a good idea".
Like I said, I've seen countless repeatable, documented demonstrations showing that, system-wide, this type of "fix" does not help general system performance (with numbers from multiple tests and a process to replicate them). Can you cherry-pick one benchmark example? Sure. Software is broken all the time. Find better software, or help the vendor fix the issue. If you start applying targeted "fixes" like that without a very rigorous, systematic approach, you're as likely to spin into a situation where you're "fixing" one problem over here while creating 12 new problems over there.
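For contrast with the anecdotes, here's a minimal sketch of what "documented and repeatable" looks like in practice: run the same workload many times, report the mean and the spread, and publish the script so anyone can replicate it. The `workload` function is a placeholder of my own, not the actual thing being benchmarked - you'd swap in whatever operation the tweak supposedly speeds up:

```python
# Sketch of a repeatable measurement methodology (not a one-off anecdote):
# many timed runs of the same workload, with mean and standard deviation,
# so someone else can reproduce the numbers on their own machine.
import statistics
import timeit

def workload():
    # Placeholder for the operation under test; substitute the real thing.
    sum(i * i for i in range(100_000))

# 20 samples, each timing 10 executions of the workload.
runs = [timeit.timeit(workload, number=10) for _ in range(20)]

print(f"mean={statistics.mean(runs):.4f}s  stdev={statistics.stdev(runs):.4f}s")
```

If the "fix" doesn't move the mean by more than the run-to-run noise (the stdev), it didn't do anything, no matter how much snappier the system "feels".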
P.S. It depends what he means by "microstuttering", but real microstuttering is usually caused by GPU design or drivers; it isn't even a very precise use of the terminology there.