As for the Port Multiplier, I can't spend any more cash on hardware this year. I'm using the Addonics HPM-XU that came with the case. They recommend leaving the DIP switches in the factory-default JBOD mode and using the JMicron utility to set up and manage the RAID.
I forgot about that.
Yes, of course, you can just use that. And I'd generally agree with them: it's probably easier to just set up the RAID in software, and with a system like that, the performance will likely be identical.
The main issues you'll have to deal with, if you decide to do it, are:
1. All drives in the volume should have (roughly) the same size and performance characteristics. I explained this before, but if you add three 1TB drives and one 4TB drive, the extra 3TB on the big drive will be wasted (there's a quick sketch of the math right after this list). Similarly, if you have a bunch of fast 7200rpm, 64MB-cache drives and one slow 5400rpm job with 16MB of cache, the whole array is going to perform according to the "weakest link".
2. Avoid the common "green" drives, and be wary of WD Black edition drives as well (though those are less likely to be a problem).
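To make the size point from item 1 concrete, here's a quick back-of-the-envelope sketch in Python (the drive sizes are just made-up examples):

    # Rough usable-capacity math for a typical striped/parity RAID volume.
    # Every member gets treated as if it were the size of the smallest drive,
    # so anything above that on a bigger drive is simply unused.

    drive_sizes_tb = [1, 1, 1, 4]        # three 1TB drives plus one 4TB drive
    smallest = min(drive_sizes_tb)

    raid5_usable = smallest * (len(drive_sizes_tb) - 1)  # one drive's worth goes to parity
    wasted = sum(size - smallest for size in drive_sizes_tb)

    print(f"RAID 5 usable space: {raid5_usable} TB")     # 3 TB
    print(f"Capacity wasted:     {wasted} TB")           # 3 TB (the extra on the 4TB drive)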
To expand a bit on the problems with certain drives...
Basically, the deal comes down to how they handle error conditions:
Consumer drives assume that there is no backup and that they are operating "alone". To this end, if they hit an error on disk (where data is difficult to read), they keep trying over and over and over again until they either get the data, or the user decides the application (or whatever) has crashed and forces it to stop.
RAID-edition and enterprise drives have a different mode where they time-limit the error recovery "dance" that the drives themselves do. They do this because the assumption is that you're using the drives in a RAID system that has its own parity information. In other words, the volume can recover, so there's no need to bog down the drive that can't get a few sectors for minutes at a time. Plus, a drive that can't read some sectors is worrying in an always-on massive volume, so you'd probably want to replace that drive. To this end, most RAID implementations only wait around 8 seconds for requested data to come back from a drive, before they "give up" and mark the drive as bad.
These two different assumptions conflict.
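If you're curious which mode a given drive is in, the relevant knob is called SCT Error Recovery Control, and smartmontools can usually read (and sometimes set) it. Here's a rough sketch in Python that just wraps the smartctl command; the /dev/sdb path is only an example, and plenty of consumer drives will report the feature as unsupported or refuse the set command entirely:

    import subprocess

    def read_erc(device="/dev/sdb"):
        """Ask a drive for its SCT Error Recovery Control (TLER) timeouts.

        RAID/NAS-oriented drives typically report something like 7.0 seconds
        for read and write; consumer drives often report it as disabled or
        not supported at all.
        """
        result = subprocess.run(
            ["smartctl", "-l", "scterc", device],
            capture_output=True, text=True, check=False,
        )
        return result.stdout

    def set_erc(device="/dev/sdb", read_ds=70, write_ds=70):
        """Try to set the recovery timeouts, in tenths of a second.

        70 = 7.0 seconds, comfortably under a typical ~8 second controller
        timeout. Drives that lock this out (modern WD Green/Black) will just
        refuse -- which is exactly the problem described above.
        """
        result = subprocess.run(
            ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
            capture_output=True, text=True, check=False,
        )
        return result.stdout

    if __name__ == "__main__":
        print(read_erc())

One caveat: you'll generally need root for smartctl to talk to the drive, and on drives that do accept the setting it usually doesn't survive a power cycle, so people re-apply it from a boot script.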
The problem, in particular with the Green drives, is that these drives are optimized for idle behavior and low power usage. They return to sleep very quickly, and spin up very slowly. Plus, as areal density (and therefore drive capacity) has ballooned, it has become more and more difficult to get those reads on the first "try" (it is a bit like shooting a bullet from your driveway and hitting someone standing on the surface of Mars). The "green" drives, in particular, just aren't that concerned with responsiveness. So, what happens is:
1. All the drives have gone to sleep.
2. Your RAID controller suddenly says "give me this data".
3. The drives start spinning up, but they take a long time, and then the extra vibration of multiple drives in close proximity all starting at once makes it take longer (or makes the first "try" at reading unreliable).
4. The controller's ~8-second timeout elapses.
5. The RAID controller decides the drive is bad and marks it as such, then either starts rebuilding the array from the parity data (assuming you have a spare available) or marks the array as "dirty" and complains.
The truth is, if the RAID controller had waited another 8-10 seconds, everything would have been fine. But RAID controllers just aren't built to tolerate that kind of high-latency operation.
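Just to put rough numbers on that sequence, here's a toy model in Python. Every latency figure in it is an illustrative guess, not a measurement; the point is just the race between the drive waking up and the controller's patience running out:

    # Toy model of the timeout race described above. All numbers are guesses.

    CONTROLLER_TIMEOUT = 8.0   # seconds the RAID controller waits before
                               # declaring a drive dead

    def time_to_answer(spin_up, read_attempt, retries):
        """Total time before the drive returns the requested data."""
        return spin_up + read_attempt * (retries + 1)

    # A sleepy "green" drive: slow spin-up, plus vibration from its neighbours
    # all spinning up at once makes the first couple of read attempts fail.
    green = time_to_answer(spin_up=7.0, read_attempt=1.5, retries=2)

    # A RAID-oriented drive: quicker wake-up, staggered spin-up, and it would
    # give up and report an error well before the controller's deadline anyway.
    nas = time_to_answer(spin_up=3.0, read_attempt=1.5, retries=1)

    for name, t in [("green", green), ("nas", nas)]:
        verdict = "kicked out of the array" if t > CONTROLLER_TIMEOUT else "fine"
        print(f"{name} drive answers after {t:.1f}s -> {verdict}")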
Where you can get into BIG trouble with this kind of situation is if it happens twice in a row. So, for example, steps 1-5 above happen, and then the array is "degraded" (this basically just means that one more drive failure means the whole thing is lost). Assuming you have a hot spare (or you pull out the "bad" drive and pop in a replacement), it will automatically start rebuilding itself to correct the issue.
The problem is that rebuilding an array means essentially reading every single sector in the array (every used sector of every disk). That provides a TON of chances for the same thing to happen again. If it happens again while the array is "degraded", then the whole thing goes tits-up and you lose the data.
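To put a number on how many chances that really is, here's the usual back-of-the-envelope unrecoverable-read-error math in Python. The 1-error-per-10^14-bits figure is what most consumer drive spec sheets quote, and the model naively treats every bit as an independent coin flip, so take the result as a rough order of magnitude:

    # Odds of hitting at least one unrecoverable read error (URE) while
    # reading all the surviving drives during a rebuild.

    URE_PER_BIT = 1e-14     # typical consumer-drive spec: 1 error per 1e14 bits read

    def rebuild_failure_odds(drives, tb_per_drive):
        bits_read = (drives - 1) * tb_per_drive * 1e12 * 8   # surviving drives, in bits
        return 1 - (1 - URE_PER_BIT) ** bits_read

    # Example: rebuilding a 4 x 2TB RAID 5 after one drive drops out.
    print(f"{rebuild_failure_odds(drives=4, tb_per_drive=2):.0%}")   # roughly 38%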
Bad news.
So, what's the solution? Well, there are "special drives" that are tuned for multiple-disk enclosures and are set with this "time limit" on their error recovery mechanisms. They wake from sleep quicker, and have special mojo to make sure they do staggered spinup when they are sleeping, and the whole bit. But, most importantly, they don't keep trying forever to recover data that might be hard to read. Instead, they try for a little bit, and then report the failure back to the RAID controller, where it can decide what to do.
The obnoxious thing is that all of these features (or almost all of them, anyway) are simply software features implemented in firmware. Physically, the drives are no different from the cheap ones; they just have different software tuning. Back in the day, you could buy drives (like the 1TB WD Blacks, originally) that shipped in "consumer mode", but with a simple command-line tool you could enable "RAID mode" in the firmware.
But drive manufacturers (particularly WD) discovered that they could charge more for these features, so they locked the consumer drives out of them. The net result is that if you have a modern WD Green or Black drive, it's "locked" in consumer mode and really isn't safe to use in a RAID. The Black drives are much safer (I run a 5-drive RAID array with 2TB WD Black drives and I've NEVER had an issue), simply because they're faster and probably get more robust validation at the factory. The Green drives walk the edge of slowness that can make them drop from an array at random, even when they're perfectly fine (the data could eventually be read; it's just going to take a while). Don't use the modern Green drives in a multi-disk array.
Other manufacturers didn't optimize so aggressively, or saw the opportunity afforded by WD's expensive RAID edition drives, and started enabling their regular consumer drives to work correctly with multi-disk arrays. Samsung and HGST both did this in the past. Unfortunately, both Samsung's and HGST's storage groups have since been gobbled up by the "big two": HGST drives are now WD drives, and Samsung drives are Seagates. Both have been quietly discontinuing the old models to avoid competing with their "RAID edition" (price-gouging) lines.
However... WD in particular released the new Red line of drives. These are essentially Green drives, slightly tweaked to work better in a multi-disk volume. AnandTech did an in-depth review and gave them great marks all around. They are substantially slower than the Black and RE ("high end" WD RAID) drives, but they are plenty fast for NAS and media-storage use, and they are reliable in a NAS box or RAID volume. They do TLER (time-limited error recovery) and some sleep/idle/spinup optimizations just like the faster (and much more expensive) RAID-specialized drives.
If you have older, existing drives, they'll probably work fine as long as you don't use the common WD "Green" line of cheap big drives. So, I'd avoid using your WD20 EADS 00R6B in the array. On the other hand, the Samsung drives (despite being their EcoGreen line) are reported to work just fine in RAID volumes (though I did see something about them having a firmware update available, which could actually be the cause of all of your trouble).
Basically, a rule of thumb is:
WD Green: AVOID
WD Red: Good to go
WD RE-something: Good to go (but you have more money than sense)
WD Black: Will probably work, but is slightly dangerous.
Samsung and HGST: Almost certainly good to go.
Seagate: I don't know their models well enough to comment. They probably have a structure like the WDs.
In all cases,
RAID is not a backup.