[blfs-dev] lvm2

Bruce Dubbs bruce.dubbs at gmail.com
Tue Aug 27 08:31:11 PDT 2013


Bruce Dubbs wrote:
> David Brodie wrote:
>> On 27/08/13 03:20, Bruce Dubbs wrote:
>>> Well, I used your input but I still couldn't get the tests to run.  They
>>> keep hanging on me, starting with the very first test.  I tried
>>> instrumenting the tests to see where it was failing, but that was
>>> inconclusive.
>>>
>>> What I did do was install lvm2 without the checks.  I then created two
>>> new partitions and was able to run pv{create,remove,display},
>>> vg{create,scan,display}, and lv{create,display,scan,extend}.
>>>
>>> I was then able to format and mount a logical volume.  All looks fine.
>>>
>>
>> Perhaps it's significant that the actual tests use a loopback device,
>> rather than a physical disk partition. I'd suggest running the tests
>> with trace (set -x), especially test/lib/aux.sh, which does the
>> preparation of the loopback devices prior to creating the pv's and vg's
>> - it produces quite sensible-looking output on my system. (Needs
>> VERBOSE=1 on the make command line)
>
> Good idea.  I'll try that.
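
As background, the loopback preparation David mentions boils down to
something like the following.  This is only a sketch, not aux.sh's
actual code; the image path and size are made up, and it needs root:

dd if=/dev/zero of=/tmp/lvm-test.img bs=1M count=64
LOOP=$(losetup -f --show /tmp/lvm-test.img)   # prints e.g. /dev/loop0
pvcreate "$LOOP"                              # PV on the loop device
pvremove "$LOOP"                              # clean up
losetup -d "$LOOP"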

I've made some progress.  I instrumented several scripts with set -x and 
ran:

sudo make check T=lvtest VERBOSE=1

I got:

VERBOSE= \
cluster_LVM_TEST_LOCKING=3 \
lvmetad_LVM_TEST_LVMETAD=1 \
./lib/harness normal:api/lvtest.sh \
               cluster:api/lvtest.sh \
               lvmetad:api/lvtest.sh
Running normal:api/lvtest.sh ... 
passed.    0:01    0:00.080/0:00.120   14       0/0
Running cluster:api/lvtest.sh ... 
skipped.
Running lvmetad:api/lvtest.sh ... 
skipped.

## 3 tests  0:02 : 1 OK, 0 warnings, 0 failures, 0 known failures; 2 
skipped, 0 interrupted
skipped: cluster:api/lvtest.sh
skipped: lvmetad:api/lvtest.sh
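
For the record, the instrumentation was nothing elaborate.  Something
along these lines (hypothetical; the exact scripts I touched varied):

sed -i '1a set -x' test/lib/aux.sh           # enable tracing near the top
sudo make check T=lvtest VERBOSE=1 2>&1 | tee trace.log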

The difference may be that I had earlier created a couple of lvm 
partitions on my sdb drive.  I'm guessing that is needed to get the 
checks to work.  Before that I was running on a system that had never 
had any lvm partitions, and the very first test hung.  Seems like a 
catch-22 to me: the checks appear to need a working lvm setup before 
lvm is installed.
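
For reference, the manual setup was roughly along these lines.  The
device names and sizes are illustrative (/dev/sdb1 and /dev/sdb2 stand
in for whatever partitions are free), not the exact commands I ran:

pvcreate /dev/sdb1 /dev/sdb2
vgcreate vg0 /dev/sdb1 /dev/sdb2
lvcreate -n lv0 -L 1G vg0
mkfs -t ext4 /dev/vg0/lv0
mount /dev/vg0/lv0 /mnt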

The results do not get printed to the screen, but are captured in a file 
for each test in test/results.
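
To skim those logs, something like this works (the per-test file names
are whatever the harness chooses; I'm not listing them here):

ls test/results/
less test/results/*    # page through the logs with :n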

Upon examination, all the tests labeled lvmetad: and cluster: were 
skipped.  The results are:

## 417 tests  5:24 : 42 OK, 1 warnings, 51 failures, 0 known failures; 
323 skipped, 0 interrupted

Still speculating, perhaps some errors are due to not having mdadm 
installed.  Some of the failures may be due to kernel configuration.
My kernel settings for the MD and Device Mapper drivers are:

CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
CONFIG_MD_MULTIPATH=y
CONFIG_MD_FAULTY=y
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM=y
# CONFIG_DM_DEBUG is not set
CONFIG_DM_CRYPT=y
# CONFIG_DM_SNAPSHOT is not set
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
# CONFIG_DM_MIRROR is not set
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_DELAY is not set
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set

If those options were all enabled, I suspect that the number of 
failures and skips would be reduced considerably.
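
For anyone who wants to test that theory, the kernel's scripts/config
helper can enable the missing options before a rebuild.  A sketch,
assuming the configured source tree is in /usr/src/linux (menuconfig
works just as well):

cd /usr/src/linux
scripts/config -e DM_SNAPSHOT -e DM_THIN_PROVISIONING -e DM_CACHE \
               -e DM_MIRROR -e DM_RAID -e DM_ZERO -e DM_MULTIPATH \
               -e DM_DELAY -e DM_FLAKEY -e DM_VERITY
make oldconfig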

   -- Bruce


