And, I should add... Better performance (particularly sustained write performance) makes the full-restore from backup, should it ever be needed, faster too.
I only write this to make the point that "only lose a subset of the data" has been, for me, a bit of a red herring benefit. To each their own, though! I can certainly see both sides, and there are situations where hardware RAID is very inconvenient.
Also...
It's a second layer of safety, which at the very least reduces the downtime, or at the very best protects you from having to re-rip 200 Blu-rays (because you're cheap like me, and keep the optical discs as the backup instead of duplicating the entire storage).
Yeah, RAID is not a backup in any way, and if you're going that kind of route for backup, then I can see that being the only tradeoff that makes sense. Backing up to a system like that is a non-starter for me, though. The vast majority of the data I care about the most doesn't have an optical backup (photos, recordings I've made, home movies, etc).
And... Again, downtime.
Write performance can be a major factor in a couple of ways, but for restoring from a monolithic backup, it can be a massive savings of time. I bet the time savings, if any exists at all, between restoring a subset of my data (from a matching single-disk backup) and restoring my entire RAID volume (from a same-size external striped RAID set) is minimal once all factors are accounted for. Again, assuming I'm not willing to deal with a bunch of individually mounted volumes on my server, which I'm not.
So, what I have is essentially RAID5+1 where the +1 is manually created mirrors on a schedule to otherwise protected drives. Yes, to fit it all, my backup volume is a RAID-0 striped array, but it is only a two-disc set, and it is only for redundancy (and there are two of them, cycled in rotation).
So... It is a compromise either way, agreed. You have to decide what makes the most sense to you. I just wanted to point out that the often-cited "downside" to hardware RAID (lose too much, and you lose it all) isn't always bad enough to justify the tradeoffs needed to avoid it.
Also...
And from where I'm standing, no file system will ever implement the flexibility I have right now. All those fancy file systems are usually limited to the same conditions an actual RAID gives you (same size drives, primarily).
That may be (almost certainly is) true.
However, the latter statement is wrong. btrfs does that without issue. Not that btrfs is the be-all-and-end-all. There are problems with it. The filesystem itself is pretty stable and well understood, but the tools that go with it are still pretty beta quality. And, it isn't anything that would ever be integrated into Windows or OSX as-is.
I really hope Microsoft gives us a good solution. I'd prefer, in a dream scenario, that btrfs (or something like it) just "took over the world" so that the filesystem could be truly cross-platform. But, that's probably a pipe dream (unless Apple surprises everyone and replaces HFS+ with it, rather than rolling their own, but that is also pretty long-shot).
But, in any case, btrfs can do RAID-equivalent striping with parity across non-matching drives. When you build a RAID in btrfs, it doesn't stripe across whole drives. It never uses the drives directly. It operates at the level of "chunks". These are 1GiB (that's the default size for data, anyway) "virtual volumes", essentially. It then stripes across these (and keeps track of which physical drives they're hosted on, rotating through drives as needed to preserve parity). That way, it can operate across non-matched drives without sacrificing a ton of "spare area" (like hardware RAID does, and even things like FlexRAID and Drobo's solution often do). And, you can "dial in" exactly how much parity you want, since it isn't limited to or linked to physical drive count.
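To make that concrete, here's a toy model of why per-chunk allocation wastes less space on mismatched drives than classic whole-drive RAID. This is NOT btrfs's real allocator (and it models a two-copy raid1-style profile for simplicity rather than parity striping); it just shows the greedy "place each chunk on the emptiest devices" idea:

```python
# Toy sketch: chunk-based allocation vs. classic whole-drive mirroring.
# Not btrfs's actual allocator -- purely illustrative.

CHUNK = 1  # allocate in 1 GiB chunks

def chunked_capacity(drives_gib, copies=2):
    """Greedily place each chunk's `copies` replicas on the `copies`
    drives with the most free space (roughly how a raid1-style
    profile spreads chunks). Returns usable GiB."""
    free = list(drives_gib)
    usable = 0
    while True:
        free.sort(reverse=True)
        if free[copies - 1] < CHUNK:
            break  # not enough devices with room for another chunk
        for i in range(copies):
            free[i] -= CHUNK
        usable += CHUNK
    return usable

def classic_mirror_capacity(drives_gib):
    """Classic RAID1 pairs whole drives, so each pair is limited
    by its smaller member (best case: pair similar sizes)."""
    s = sorted(drives_gib)
    return sum(s[i] for i in range(0, len(s) - 1, 2))

drives = [4000, 2000, 1000, 1000]  # mismatched drive sizes, in GiB
print(chunked_capacity(drives))         # chunk-level placement
print(classic_mirror_capacity(drives))  # whole-drive pairing
```

With those mismatched sizes, chunk-level placement keeps two copies of 4000 GiB of data, while whole-drive mirroring tops out at 3000 GiB, with the rest of the big drive stranded as "spare area".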
More important to me, however, is the fact that it protects filesystem data instead of just filesystem metadata. That's the major flaw in my, and essentially everyone's, backup system using currently available, friendly filesystems. My backup system will happily back up corrupt file data, and give me no way to "get it back". NTFS journals the filesystem metadata, but not the data itself, and gives you no way to detect "spontaneous" changes to data on disk. That's terrifying. I have some mitigation due to my rotation scheme (which has its own benefits), but... Shudder. With a system like btrfs, it doesn't matter because:
1. It would detect the data corruption.
2. You can unwind even intentional (user-initiated) writes, like Apple's "Time Machine" (but not based on flaky hacks and corrupts-itself-at-the-drop-of-a-hat HFS+). And this ability to unwind data writes comes with every single write (data writes are copy-on-write, not overwrites), if configured as such. With sufficient spare area and retained snapshots, you can "roll backwards" all the way to the moment the drive was first formatted.
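The detection part (point 1) is the piece NTFS simply can't do, and the mechanism is easy to sketch: keep a checksum for every data block in the metadata, and verify it on every read. btrfs uses crc32c by default; this toy version uses `zlib.crc32` as a stand-in, and the `ChecksummedStore` class is purely hypothetical:

```python
# Toy sketch of checksum-on-read, the mechanism behind btrfs's
# silent-corruption detection. Illustrative only.
import zlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}  # block number -> data bytes ("on disk")
        self.sums = {}    # block number -> checksum ("in metadata")

    def write(self, n, data):
        self.blocks[n] = data
        self.sums[n] = zlib.crc32(data)

    def read(self, n):
        data = self.blocks[n]
        if zlib.crc32(data) != self.sums[n]:
            # btrfs would fail this read (or repair it from a good
            # copy) rather than silently return rotten data -- which
            # also means your backup job never copies the rot.
            raise IOError(f"checksum mismatch in block {n}")
        return data

store = ChecksummedStore()
store.write(0, b"home movies")
store.blocks[0] = b"homX movies"  # simulate a silent bit flip on disk
try:
    store.read(0)
except IOError as e:
    print(e)  # corruption detected instead of backed up blindly
```

A plain filesystem has no `sums` table for file data, so that `read` would hand the flipped bytes straight to the backup tool.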
All the other benefits of a modern filesystem really "change the game" for backups. It might not be flexible in the "traditional" ways you had before, but it is flexible in a whole set of new ways that are hard to consider when trained to think "classically".