[blfs-dev] New page
zarniwhoop at ntlworld.com
Wed Feb 8 18:00:15 PST 2012
On Wed, Feb 08, 2012 at 04:46:23PM -0800, Qrux wrote:
[ confining my remarks to SW RAID; I have no experience of the others ]
> SW RAID is great. Generally faster (at least for RAID-0, back when I used to benchmark this sort of thing). But, to be fair, while SW RAID has the benefit of being open source, it does suffer from version skew too. I have no idea if new kernel versions make old MD devices unrecognizable, or if everything is always backwards-compatible. That's worth finding out & mentioning. And even if the kernel is currently backwards-compatible, who's providing the guarantee that newer versions will be too? Sure, it's open source, but realistically most RAID users aren't going to be able to kernel-hack (driver-hack) in the event that the kernel eventually deprecates a version of the MD driver. To me, that's just as bad a problem as not being able to find the same HW card.
I've used SW RAID-1 for several years: my impression is that the
change happens in mdadm, rather than in the kernel, and that (so far)
backwards-compatibility has been a major consideration.
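For anyone worried about this, the superblock metadata version an array was created with can be checked with mdadm itself. A sketch (the device names /dev/md0 and /dev/sda1 are examples; substitute your own):

```shell
# Show the metadata version the array is using
mdadm --detail /dev/md0 | grep -i version

# Or examine a member device's superblock directly
mdadm --examine /dev/sda1 | grep -i version
```

Both commands need root and an assembled (or at least present) array to report anything.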
> It's also worth saying that in software RAID, you have to shut down the machine to do any repairs, even if the array is running in a degraded state. Unless you have PCI- or SATA-hotplug in your kernel (is this widely supported or stable?)...and even then, you'd have to be able to put those drives in a hot-plug bay.
> Might also want to mention hot spares.
> And...(again, still trying to be constructive, not a jerk)...a page about RAID absolutely has to have a recovery HOWTO. It's just dangerous not to include one, lest someone get a machine running and then have no idea how to recover when it degrades. And, in addition to the "normal" recovery scenarios, point out that it might be worth using udev (disk/by-id) long names, lest the devices get reordered (or the kernel reorders them on a version change). I personally just went through this the hard way on a colo server...
A recovery HOWTO might be useful (for RAID-1, the hardest part is
actually making sure you have identified the bad drive - using
different brands of drive [ if there is a choice ] can help!). As for
RAID-5, I've avoided it - if it were something I dealt with
regularly, I'm sure it would be fine, but for something (recovery) I
only ever do infrequently, I've seen too many reports on lkml where
recovery has been non-obvious to a layman. OTOH, wrong information
in a HOWTO is probably worse than none.
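For the RAID-1 case, the usual mdadm replacement sequence is short. A minimal sketch, assuming the failed member is /dev/sdb1 in array /dev/md0 (both names are examples - identify the real failed drive first, e.g. by matching its serial number under /dev/disk/by-id):

```shell
# Confirm which member is faulty before touching anything
mdadm --detail /dev/md0

# Mark the member failed (if the kernel hasn't already) and remove it
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# ...physically swap the drive, partition it to match the survivor...

# Add the new member; the resync starts automatically
mdadm /dev/md0 --add /dev/sdb1

# Watch rebuild progress
cat /proc/mdstat
```

All of this needs root, and the exact partitioning step depends on the disks in question, which is exactly why a careful HOWTO matters.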
What surprised me is that /etc/mdadm.conf isn't mentioned. I
thought I had to create this (either manually, or by running some
command - I forget which), and that without it the kernel cannot
assemble the array(s)?
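For what it's worth, mdadm can generate the file's ARRAY lines itself. A sketch (check the output, and back up any existing /etc/mdadm.conf, before overwriting):

```shell
# Print ARRAY lines for all currently assembled arrays,
# then (after reviewing them) write the file
mdadm --detail --scan
mdadm --detail --scan > /etc/mdadm.conf
```

Whether the file is strictly required depends on how the arrays are assembled at boot (mdadm in an initramfs vs. in-kernel autodetection of old-style metadata), so the page should probably spell that out.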
Looks like a good addition to the book.
das eine Mal als Tragödie, das andere Mal als Farce [the first time as tragedy, the second time as farce]