[blfs-dev] New page

Bruce Dubbs bruce.dubbs at gmail.com
Wed Feb 8 20:46:32 PST 2012


Qrux wrote:
> Nice page.
> 
> Just recently been dealing with a HW card that udev doesn't like
> (doesn't have SERIAL or SHORT_SERIAL), so I've been thinking about
> this a lot recently.
> 
> * * *
> 
> As for pros v cons...
> 
> I would think the main advantage to modern HW RAID systems is the
> ability to hot-plug.  

That's true, but in my experience the procedures are very much vendor 
dependent.  Other than mentioning that, I don't see what we could do.

> SW RAID has many advantages, but being able to
> detect a failure, and then physically see, replace, and rebuild a
> degraded array while the array is alive has absolutely got to be the
> unequivocally primary benefit of a complete HW RAID setup.
> 
> SW RAID is great.  Generally faster (at least for RAID-0, back when I
> used to benchmark this sort of thing).  But, to be fair, while SW has
> the benefit of being open-sourced, it does suffer from version skew,
> too.  I have no idea if new kernel versions make old MD devices
> unrecognizable, or if everything is always backwards-compatible.
> That's worth finding out & mentioning.  

What I wrote was intended to be introductory only.  There are a lot of 
ways to use RAID and there are lots of tutorials available.  I don't 
think it would be terribly useful to reproduce what is already available.

> And, even if the kernel is
> backwards-compatible ATM, who's providing the guarantee
> that newer versions will also be?  Sure, it's open-sourced, but,
> realistically, most RAID users aren't going to be able to kernel-hack
> (driver-hack) in the event that the kernel eventually deprecates a
> version of the MD driver.  To me, that's just as bad a problem as not
> being able to find the same HW card.

That sounds hypothetical to me.  mdadm supports at least three 
metadata formats, 0.90, 1.0, and 1.2.  In my experience Linux is a lot 
less likely to drop old formats than proprietary software.
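For what it's worth, checking what format an existing member uses, or
pinning the format at creation time, is a one-liner either way (the
device names below are only placeholders):

  mdadm --examine /dev/sda1 | grep Version
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --metadata=1.2 /dev/sda1 /dev/sdb1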

> It's also worth saying that in software RAID, you have to shut down
> the machine to do any repairs, even if the array is running in a
> degraded state.  Unless you have PCI- or SATA-hotplug in your kernel
> (is this widely supported or stable?)...and even then, you'd have to
> be able to put those drives in a hot-plug bay.

And you need the hardware to support that.
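The software side of a swap on a live array is fairly short; it's the 
physical part that needs hot-plug capable hardware.  Roughly something 
like this (/dev/md0 and /dev/sdb1 are only placeholders):

  mdadm /dev/md0 --fail /dev/sdb1     # mark the dying member faulty
  mdadm /dev/md0 --remove /dev/sdb1   # detach it from the array
  # physically swap the disk and repartition if needed, then:
  mdadm /dev/md0 --add /dev/sdb1      # re-add; the rebuild starts automatically
  cat /proc/mdstat                    # watch the resync progress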

> Might also want to mention hot spares.
> 
> And...(again, still trying to be constructive, not a jerk)...a page
> about RAID absolutely has to have a recovery HOWTO.  It's just
> dangerous not to include it, lest someone get a machine running and
> have no idea how to recover from it.  And, in addition to the "normal"
> recovery scenarios, point out how it might be worth using the udev
> (disk/by-id) long names lest the devices get reordered (or the kernel does
> it on a version change).  I personally just went through this the
> hard way on a colo server...

A link to a more advanced page may be useful, but a full HOWTO is beyond 
the scope of what we are trying to do.
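That said, a couple of one-liners along those lines wouldn't hurt: a 
spare can be added to a healthy array, and recording arrays by UUID plus 
using the /dev/disk/by-id names sidesteps the device reordering problem 
(device names again only placeholders):

  mdadm /dev/md0 --add /dev/sdc1            # becomes a spare if the array is complete
  mdadm --detail --scan >> /etc/mdadm.conf  # record arrays by UUID, not device name
  ls -l /dev/disk/by-id/                    # persistent names, independent of probe order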

> Also, it's probably worth mentioning that most HW RAID setups
> (provided they have a Linux driver) make recovery much easier.  Just
> slide a new drive in, maybe issue a command or two, and the card's
> firmware will take care of the rebuild, all while the machine
> continues to run.  With mdadm, I think the whole recovery process is
> harder (or, more involved & dangerous to someone who might be new).
> 
> Finally, might want to combine the RAID and LVM pages.

No. RAID and LVM are separate concepts.  I was going to write a page on 
'About LVM' though.
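They do stack cleanly, though, which that page could show, e.g. putting 
a volume group on top of an md device (names and sizes below are only 
illustrative):

  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 10G -n home vg0
  mkfs -t ext4 /dev/vg0/home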

   -- Bruce


