grub setup Error 22

Eric Stout stout at lost-soul.net
Fri Feb 24 00:36:25 PST 2006


On Thu, 23 Feb 2006, Alan Lord wrote:

> > as grub won't be able to install to a boot sector of a "virtual" disk
> >
> > it needs a physical device. Grub won't have loaded any drivers to make
> > the raid array visible at boot time
> >
> > you need a single disk really.
> >
> > Matt
> >
>
> I installed grub on an old IBM Rackmount server with dual processors and
> a scsi raid setup over three disks. The raid was handled IN HARDWARE by
> the raid controller (you set this up through the controller's BIOS).
>
> As far as FDISK was concerned it was just a big disk... I partitioned it
> how I wanted, built LFS, and Grub installed fine and booted the system
> perfectly...
>
> Al
>
> LFS#: 216

I see what looks like the beginnings of a heated debate, so I'll drop my
own 2 cents on RAID and booting in hopes of defusing the situation.

On a _true_ hardware RAID system, the bootloader is subject only to the
same restrictions it normally is regarding its location on disk.

On a fake hardware RAID system, which is unfortunately what most systems
provide nowadays (through the motherboard's onboard RAID; for example,
Tyan uses a Sil3114 SATA controller driven by your choice of Intel or
Adaptec option ROMs...), drivers need to be loaded before the device can
be seen.  This means a kernel has to be loaded, and the kernel isn't
loaded until _after_ the bootloader runs.
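
As a rough illustration (device names are hypothetical, and dmraid is
just one tool that can read fakeraid metadata), the member disks of a
fakeraid set only look like an array once the kernel and a driver are
up:

    # Before any RAID driver runs, the kernel sees only the raw member
    # disks of the fakeraid set (names here are hypothetical):
    ls /dev/sd*          # e.g. /dev/sda /dev/sdb

    # dmraid can read the metadata the option ROM wrote to the disks
    # and assemble the set via device-mapper -- but only after the
    # kernel is up:
    dmraid -r            # list raid sets found in the on-disk metadata
    dmraid -ay           # activate all discovered sets

The bootloader itself has none of that machinery, which is exactly why
it can't see the "virtual" disk.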

Even on RAID 1 systems, the boot sector isn't truly raided.  The BIOS
reads the boot sector of whatever device it was told to.  That sector,
through hardware raid, can be copied across multiple disks to sync the
array _after_ boot.  It's a little trick for fault tolerance of the
primary disk, useful for low-budget systems running RAID 1 for
redundancy between two disks, each housing the OS and data.

In this case, you have to install boot information on both disks in the
array (i.e. install grub twice).  It just so happens that this information
is synced between the disks after boot.  Before boot, there are two
physical drives, independent of each other, that just happen to have
identical boot information in the same place.
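
For example, with GRUB legacy this can be done from the grub shell.  The
device names below (/dev/sda and /dev/sdb as the two array members, with
the first partition holding /boot) are assumptions for a typical
two-disk setup:

    grub> device (hd0) /dev/sda
    grub> root (hd0,0)
    grub> setup (hd0)

    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)

Mapping each disk as (hd0) in turn makes the boot code on each disk
refer to itself as the first BIOS drive, so either disk can boot alone
if the other one dies.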

This only works because RAID 1 doesn't split data into blocks or chunks to
spread across disks.

Again, boot info on any _true_ hardware raid system should work just fine,
regardless of raid level, since a true hardware raid controller presents a
single device to the BIOS.  Or should, anyway.  The way things go these
days, I'm sure someone can bring up at least one true hardware raid
system out there that does not report a single logical unit to the BIOS...
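
If you're not sure which kind you're dealing with, one rough check from
a running Linux system (assuming the controller's driver is loaded) is
to see whether the OS gets one logical disk or all of the members:

    # A true hardware raid controller exposes the whole array as one
    # logical disk (to the OS here, and to the BIOS at boot):
    fdisk -l
    # Fakeraid exposes every member disk individually until a driver
    # assembles them, so you'd see each physical disk listed instead.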

I have mixed feelings about the actual logic of making the boot sector
fault tolerant by spreading it across arrays in chunks.  All raid
solutions except raid 1 crumble when one disk goes down.  The only
difference is that you can rebuild the array and get it back.  That's a
pro.  On the other hand, if you need that kind of fault tolerance, it's
probably not a bad idea to keep a spare OS disk on hand, identical to
the hypothetically failed one, to swap in on failure, which negates the
whole boot-sector backup anyways.  This last paragraph is all my own
personal feelings about raid and boot and junk, and as with anything
else on the internet, should be taken with a grain of salt and a lot of
your own common sense :)

Eric



