INTERACT FORUM


Author Topic: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)  (Read 1654 times)

rudyrednose

  • Regular Member
  • Galactic Citizen
  • ****
  • Posts: 344
  • nothing more to say...

I am using HTPCs on each TV in the house, all running JR Media Center in client mode and connecting to a JR Media Center Server. 
I depend on JRiver for everything, as all my TV is OTA.

Two of those HTPCs are small first generation i3 NUCs, with a 120GB mSata SSD and 8GB RAM. 
All HTPCs have a SSD system disk and at least 8GB RAM, and all are running Windows 8.1.

My concern is useless wear and tear of the SSDs.  Flash memory has a finite number of write cycles. 
As the client machines always depend on the server for media files, it would be nice to use “disposable” storage for buffering.

So I wonder: could I carve out part of my 8GB of RAM and create a RAMdisk for the Media Center client's temporary buffers?
What would be a sufficient size?  1GB? 2GB? 4GB?  Obviously I want the smallest RAMdisk that would ensure correct operation in client mode. 

No transcoding is ever done on the HTPCs, but I occasionally use Gizmo or JRemote to control a client (for music).

And how should I configure JRiver on the client machines?
Is Options/File Location/Program Files/Temporary files the only location I would need to move to the RAMdisk, or do I also need to redirect the three Conversion Cache locations?

Am I missing something?

Cheers !
Logged

BryanC

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 2661
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #1 on: November 13, 2014, 04:52:19 pm »

Quote
Am I missing something?

I'm fairly certain that all I/O during streaming from a Library Server is handled in memory by default.
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #2 on: November 13, 2014, 04:56:16 pm »

Quote from: rudyrednose
Am I missing something?

You're missing real-world perspective on SSD longevity:
http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two-remain-after-1-5pb

If you do this, you will create real-world problems to solve a fictional problem that will almost certainly never actually affect you.  Don't be fooled by the fact that they were able to make the drives fail.  Even the worst of them (an Intel 335) would take years and years of constant writes, hundreds of gigabytes per day, to die.  It took 750TB of writes before it checked out, and even that appeared to be due to a "timebomb" in the firmware rather than actual NAND exhaustion.

750TB means you could write 420GB per day for five consecutive years before hitting that mark.  And that was the worst one tested.
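The arithmetic above can be sanity-checked with a short script (a sketch; the 750TB endurance figure and 420GB/day rate are taken from the TechReport result cited in this post):

```python
# Sanity check of the endurance arithmetic above: how long does 750 TB of
# total writes last at a sustained 420 GB written per day?
TOTAL_ENDURANCE_TB = 750   # writes the worst drive (Intel 335) absorbed before dying
DAILY_WRITES_GB = 420      # hypothetical constant daily write load

days = TOTAL_ENDURANCE_TB * 1000 / DAILY_WRITES_GB   # ~1786 days
years = days / 365

print(f"{years:.1f} years of {DAILY_WRITES_GB} GB/day writes")  # 4.9 years
```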

The other electronics (like the controller) are going to die long before the NAND does.  If you do get a drive where that happens and the over-provisioned spare area can't save it, then you got a faulty drive, which could just as easily happen with a spinning disk.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

BartMan01

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 1513
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #3 on: November 13, 2014, 05:27:55 pm »

Quote from: rudyrednose
My concern is useless wear and tear of the SSDs.  Flash memory has a finite number of write cycles.

I would ask why you are concerned about this.  The studies I have seen show that modern SSDs will last at least as long as most modern spinning hard drives.  Here is one: http://techreport.com/review/26523/the-ssd-endurance-experiment-casualties-on-the-way-to-a-petabyte

Edit:  Looks like glynor beat me to it...
Logged

rudyrednose

  • Regular Member
  • Galactic Citizen
  • ****
  • Posts: 344
  • nothing more to say...
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #4 on: November 13, 2014, 06:29:10 pm »

First of all, thank you all for your answers.  They are appreciated.
I hear you: in practice it's not a problem.  But for the sake of discussion:

Quote from: BryanC
I'm fairly certain that all I/O during streaming from a Library Server is handled in memory by default.

That I like to read!

Quote from: glynor
If you do this, you will create real world problems to solve a fictional problem ...

Flash cells: effectively unlimited reads, but a finite number of writes, and the failure rate rises with write count.
DRAM cells: unlimited reads and writes, for all practical purposes.
I have experience using flash in everything from FPGA serial PROMs to embedded system memory.  It IS less reliable than RAM.

The less mileage I put on the SSDs, the longer these simple NUCs remain convenient media appliances I put no more money into  :)  
(wishful thinking)

But glynor, I fail to see the problem apart from reduced system RAM.  And I have no spinning disks.

And the article you mention is flawed.  It says:

Quote
They all exceeded their endurance specifications early on, successfully writing hundreds of terabytes without issue. That's a heck of a lot of data, and certainly more than most folks will write in the lifetimes of their drives. ...  while the 335 Series croaked at 750TB.

Run the numbers: 750TB of writes divided by the ~80GB of free space on my mSata SSDs is only 9600 full 80GB write cycles!
And this: http://www.anandtech.com/show/6388/intel-ssd-335-240gb-review/2 mentions that those chips (Intel 335 series) are rated at 3000 program/erase cycles per cell (per page, actually)!
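The cycle count works out as follows (a sketch using the figures in this post; the 9600 figure assumes 1TB = 1024GB, while decimal terabytes would give 9375 cycles):

```python
# Checking the cycle arithmetic above: total observed endurance divided by
# the free space being rewritten, compared against the rated P/E cycles.
endurance_gb = 750 * 1024   # 750 TB written before the Intel 335 failed
free_space_gb = 80          # free space on the 120 GB mSATA SSD
rated_pe_cycles = 3000      # Intel 335 NAND rating, per the AnandTech review

full_write_cycles = endurance_gb / free_space_gb
print(full_write_cycles)                     # 9600.0 full 80 GB rewrites
print(full_write_cycles / rated_pe_cycles)   # 3.2x the rated endurance
```

Note that the drive still absorbed over three times its rated program/erase count before dying, which is the counter-argument made in the replies below.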

Food for thought?
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #5 on: November 13, 2014, 07:32:57 pm »

I know how NAND works in great detail.

Quote from: rudyrednose
And the article you mention is flawed.
They all exceeded their endurance specifications early on, successfully writing hundreds of terabytes without issue. That's a heck of a lot of data, and certainly more than most folks will write in the lifetimes of their drives. ...  while the 335 Series croaked at 750TB.

Run the numbers: 750TB of writes divided by the ~80GB of free space on my mSata SSDs is only 9600 full 80GB write cycles!
And this: http://www.anandtech.com/show/6388/intel-ssd-335-240gb-review/2 mentions that those chips (Intel 335 series) are rated at 3000 program/erase cycles per cell (per page, actually)!

And the actual real-world tests showed that they exceeded their rated endurance by many, many times.  Also, you fail to consider wear leveling and over-provisioning.  You did not explain how the test is flawed (which it isn't; they showed their methodology in great detail, and it has since been repeated elsewhere).

9600 cycles of 80GB writes is still:

420GB per day for five consecutive years before hitting that mark.  And that was the worst one tested.

If you plan to write 800GB per day to your drive (completely erasing and re-writing it 10 times per day) then maybe, just maybe, it would only last two or three years.  Of course, if you're doing that, you can probably afford to replace them yearly or something, or to get an SLC drive.  You'd probably need a PCIe SLC drive anyway, to push that much data through in a day (I could do the math, but I don't care because this is an absurd argument).
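Even that hypothetical worst case pencils out roughly as claimed (a sketch; the 750TB endurance figure and the 10-rewrites-per-day scenario are from this exchange):

```python
# Rough lifetime under the hypothetical worst case above: an 80 GB drive
# completely rewritten 10 times a day, against 750 TB of observed endurance.
endurance_tb = 750
daily_writes_gb = 800   # 80 GB drive fully rewritten 10x per day

days = endurance_tb * 1000 / daily_writes_gb   # ~938 days
years = days / 365

print(f"~{years:.1f} years")   # ~2.6 years
```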

If you want to look up why RAM disks are (1) flaky, (2) absurd in the age of SSDs, and (3) pointless on modern operating systems, I'll leave that as an exercise for the reader.  I've posted about it before with charts and graphs and tests and explanations from experts.

Suffice to say that if you do it, we won't help you with all of the weirdness and data loss you'll almost certainly encounter.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: Using a RAMdisk for buffer space on a JR client machine? (SSD wear & tear)
« Reply #6 on: November 13, 2014, 10:18:54 pm »

I should also mention: a far more sensible way to "protect" an SSD that takes heavy writes (which yours probably doesn't, but maybe if you're also using it as a MySQL server or something) is simply to add more over-provisioning.

All remotely modern SSD controllers, certainly since the X25-M, will automatically use any unformatted space on a disk as additional spare area for wear leveling and NAND cell swap-out purposes.  So, as recommended by AnandTech, if you're really concerned about it, just partition your disk so that 20% or more of it is left unpartitioned and unformatted.  This will increase longevity (probably to an absurd degree) and level out performance of the drive as well (which is more important with smaller drives that use fewer channels in the internal NAND array).
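For the 120GB drives in this thread, the suggestion above works out as follows (a sketch; the 20% spare-area figure is the AnandTech recommendation cited in the post, and the partition would be created with your OS's normal partitioning tool, e.g. Disk Management on Windows 8.1):

```python
# Sketch of the over-provisioning suggestion above: size the partition so
# that ~20% of the disk is left unpartitioned for the controller to use
# as extra spare area for wear leveling.
disk_gb = 120        # the NUCs' mSATA SSDs
op_fraction = 0.20   # recommended unpartitioned spare area

partition_gb = disk_gb * (1 - op_fraction)
spare_gb = disk_gb - partition_gb

print(f"Partition {partition_gb:.0f} GB, leave {spare_gb:.0f} GB unformatted")
# Partition 96 GB, leave 24 GB unformatted
```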

Otherwise, I'm keeping this locked because we don't want discussions of snake-oil "fixes" here that will lead users to do unsupportable and problematic things with their setups.  If you insist on going down this road, you're on your own; please ask for advice elsewhere.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/