[blfs-dev] LVM2

Qrux qrux.qed at gmail.com
Fri Feb 3 23:45:30 PST 2012


On Feb 3, 2012, at 8:20 PM, Bruce Dubbs wrote:

> Qrux wrote:
>> The instructions on the svn LVM2 page could use a bit of updating.
>> 
>> The package isn't usable (at least according to docs) without certain
>> options.
> 
> I've certainly been able to create, format, and mount an lvm partition...

Fair enough.

>> They're on the page, but they're not in the build commands.
>> Also, there's some nice info in the docs subdir of the tarball that
>> doesn't get installed.
>> 
>> I propose this update to the LVM2 page (including a version bump to LVM2 2.02.90):
>> 
>> ====
>> ./configure --prefix=/usr --enable-pkgconfig --enable-dmeventd --enable-cmdlib
> 
> When you do that, the only executable added is dmeventd, but the 
> executables do link to a new library.  The description of dmeventd is 
> "the event monitoring daemon for device-mapper devices.  Library plugins 
> can register and carry out actions triggered when particular events occur."
> 
> I have no idea what that means or how it would be used.  The fact that 
> it's a daemon appears to mean that a boot script needs to be written to 
> control it.  I'm reluctant to do that unless I know what it does.
> 
> If we have a writeup for a separate page "Using LVM" that explains it, 
> I'd think the change would be appropriate.

Information on dmeventd is sort of hard to find; most of it is buried in the "activation" section of the default lvm.conf file.  I think its main purpose is to let LVM handle certain device failures automatically.  Here's one relevant example, covering automatic rebuild onto a RAID hot-spare:

    # 'raid_fault_policy' defines how a device failure in a RAID logical
    # volume is handled.  This includes logical volumes that have the following
    # segment types: raid1, raid4, raid5*, and raid6*.
    #
    # In the event of a failure, the following policies will determine what
    # actions are performed during the automated response to failures (when
    # dmeventd is monitoring the RAID logical volume) and when 'lvconvert' is
    # called manually with the options '--repair' and '--use-policies'.
    #
    # "warn"    - Use the system log to warn the user that a device in the RAID
    #             logical volume has failed.  It is left to the user to run
    #             'lvconvert --repair' manually to remove or replace the failed
    #             device.  As long as the number of failed devices does not
    #             exceed the redundancy of the logical volume (1 device for
    #             raid4/5, 2 for raid6, etc) the logical volume will remain
    #             usable.
    #
    # "allocate" - Attempt to use any extra physical volumes in the volume
    #             group as spares and replace faulty devices.
    #
    raid_fault_policy = "warn"
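
For what it's worth, the same section also carries the global switch that decides whether dmeventd is used at all.  In my copy of lvm.conf it reads (the shipped default may differ):

    # Setting this to 0 disables dmeventd monitoring entirely; with 1,
    # newly activated logical volumes are registered with dmeventd.
    monitoring = 1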

Not all of the dmeventd functionality is strictly LVM-related, though asking people to unpack LVM just to install dmeventd for RAID monitoring is equally... questionable.
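
As for a boot script: from what I can tell, dmeventd isn't normally launched directly; the LVM tools spawn it when monitoring is switched on with vgchange.  So something along these lines might do (an untested sketch, and it would need to be redone in the proper LFS bootscript style):

    #!/bin/sh
    # Hypothetical /etc/rc.d/init.d/dmeventd sketch -- untested.
    case "$1" in
        start)
            # Switching monitoring on makes the LVM tools spawn dmeventd
            # and register the active logical volumes with it.
            vgchange --monitor y
            ;;
        stop)
            # Unregister the volumes from dmeventd again.
            vgchange --monitor n
            ;;
        *)
            echo "Usage: $0 {start|stop}"
            exit 1
            ;;
    esac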

>> LVM2_DOC_DIR=/usr/share/doc/lvm2-2.02.90
>> install -d ${LVM2_DOC_DIR}
>> cp -va doc/* ${LVM2_DOC_DIR}
>> rm -vf ${LVM2_DOC_DIR}/Makefile{,.in}
> 
> I'd think this would be better for the last two lines:
> 
>  cp -v doc/{*.txt,example*} ${LVM2_DOC_DIR}

> Although in the book, for consistency, I'd spell out 
> /usr/share/doc/lvm2-2.02.90 in both the install and cp commands.

Don't want the kernel subdir?  Maybe:

	install -d                         /usr/share/doc/lvm2-2.02.90
	cp -va doc/{kernel,*.txt,example*} /usr/share/doc/lvm2-2.02.90

is better?
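
Either way, cp -a picks up the kernel subdirectory recursively, so a quick sanity check afterwards (assuming the path above) is just:

	ls -R /usr/share/doc/lvm2-2.02.90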

	Q



