Michael123... long story to answer your short question.
I've been on a RAID learning curve for a while. Started with an Adaptec 7 Series card (SAS2, 6Gb/s) and progressed to five WD Black 5TB drives stuffed in a Dell case in RAID 5. That gave me basically 16TB of usable storage space, which I quickly filled.
I picked up a Norco 4U 16-bay SAS2 server chassis to hold more drives; it has two 5.25" bays holding a Blu-ray drive and a six-slot 2.5" Icy Dock drive cage. Moved the Dell motherboard in and got 8 WD Red Pro 6TB drives, then started building RAID 5, 50, and 6 configurations to learn on. Using the original WD 5TB drives in five of the 16 bays and a couple of Black 6TB drives as hot spares, I ran speed tests by transferring TBs of Blu-rays at a time between arrays and running benchmarks. Then I started yanking drives to see how long rebuilds would take. The global hot spares would pick up instantly; they can be left in place, or a new drive can be installed in the failed slot and the rebuild will continue.
So, just like with cars, I got speed crazy and wanted to see how fast I could get transfer rates... enter the Adaptec 81605ZQ SAS3 12Gb/s RAID card (ROC, RAID-on-chip). The Q is the maxCache version, which lets you use SSDs as cache. I'm using five 256GB Samsung 850 Pros in RAID 5 for basically a TB of cache. I currently have the two 30TB configurations (8 x 6TB drives each) shown in the screenshot on the same controller, one RAID 50 and one RAID 6. I can move TBs between the two at around 750MB/s on a bad day, and I've seen it run over 1GB/s for a while.
SO... to your question about 30-second spin-up time...
I have the option to slow the drives after a set period of no activity and/or stop them. I like to just leave the system on and only reboot every 21-28 days under normal conditions, basically for updates. Since I may not use the system for a few days at a time, I let the drives spin down. The controller checks on them every 24 hours. I also have it set to spin the drives up progressively instead of all simultaneously; that is easier on the PSU, since startup pulls more load until the platters are up to speed. So it takes a little while to wake up (rough math below). I never put a stopwatch on it, but I can if you want the data.
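For a ballpark on why the wake-up takes a while: here's a quick back-of-the-envelope sketch of staggered spin-up. The stagger gap and per-drive spin-up time below are my assumptions, not measured values or anything from the Adaptec docs, so treat the result as an order-of-magnitude estimate only.

```python
# Rough staggered spin-up estimate.
# ASSUMPTIONS (not measured): the controller starts drives a couple of seconds
# apart, and a 7200 RPM drive needs several seconds to reach full speed.
drives = 16            # drive bays in the chassis
stagger_delay_s = 2    # assumed gap between start commands
spin_up_s = 8          # assumed time for one drive to come up to speed

total_s = (drives - 1) * stagger_delay_s + spin_up_s
print(f"~{total_s} seconds until the last drive is ready")   # ~38 seconds
```

With numbers in that range, a 30+ second wake-up on a full 16-bay chassis is about what you'd expect.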
With all of the above info, I've been collecting parts and pieces to build a server workstation around a Supermicro X10DAX motherboard and a couple of Xeon E5-2650s I have: 126GB of RAM, an Intel SSD DC P3608 Series 1.6TB NVMe drive run in RAID 0, an NVIDIA Titan Xp, and an Adaptec 81605ZQ RAID controller with 8 x 512GB Samsung 850 Pro SSDs for maxCache. (I found SSDs get slower as they fill, so they're in RAID 5 and short-stroked for 2TB of cache, the most the controller can deal with.)
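For anyone following along, the cache sizing works out roughly like this. A quick sketch: the 2TB maxCache ceiling is taken from what I said above about the controller, and the rest is just RAID 5 arithmetic; the percentage at the end is my own framing.

```python
# maxCache pool sizing sketch (drive counts/sizes from the post above).
ssd_count = 8
ssd_size_gb = 512
raid5_usable_gb = (ssd_count - 1) * ssd_size_gb    # one drive's worth goes to parity
print(raid5_usable_gb)                              # 3584 GB raw usable

cache_limit_gb = 2048                               # 2TB controller limit, per the post
unused = 1 - cache_limit_gb / raid5_usable_gb
print(f"{unused:.0%} of the pool left unprovisioned")  # ~43% never gets written, so the SSDs stay "empty" and fast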
I've decided RAID 60 is the best for my media storage and playback. I have 40 WD Red Pro 6TB drives I can use. I'm on the fence: 32 drives in a 16 x 2 configuration, or 39 drives in a 13 x 3 configuration. Either gives me good protection from failure, with the three-span layout being safer and faster, I think / hope! 32 drives should give about 135TB of usable space, 39 about 159TB. Spin-up time?
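The standard RAID 60 arithmetic (each RAID 6 span gives up two drives to parity) comes out higher than those figures, so I assume they already bake in formatting and filesystem overhead; either way, here's the raw math as a sketch, nothing controller-specific:

```python
# RAID 60 usable-capacity sketch: two parity drives per RAID 6 span.
# Drive size and span layouts are from the post; the formula is generic RAID 60 math.
def raid60_usable_tb(spans, drives_per_span, drive_tb=6):
    data_drives = spans * (drives_per_span - 2)
    return data_drives * drive_tb

print(raid60_usable_tb(spans=2, drives_per_span=16))   # 32 drives -> 168 TB raw
print(raid60_usable_tb(spans=3, drives_per_span=13))   # 39 drives -> 198 TB raw
```

The ~135TB / ~159TB figures above are lower, presumably after overhead, but the ratio between the two layouts is the same either way, so the comparison holds.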
I think Windows 10 Pro can deal with a drive that large.
Anyway... learning curve: you never really know until you build it and beat the poo out of it.
OH... did I mention water cooled? Trying for as quiet as possible with a bunch of spinning media; tens of pounds of sound-deadening material being applied to the chassis. The X10DAX motherboard can overclock the Xeons a little.