Step 6.11 - Glibc - tst-rwlock6 error

Jon Fullmer jon at jonfullmer.com
Wed Oct 5 21:24:59 PDT 2005


I'm building LFS 6.1 on a Power Macintosh G3 minitower (300 MHz PowerPC 750)
running Mandrake (Mandriva) 10.1 PPC.  I've gotten through the builds okay,
all the way up to step 6.11 - Glibc 2.3.4.

I applied the patch.  The configure went fine, and so did the make, but I
hit a failure during "make check".  Here's a clip of the output:

GCONV_PATH=/sources/glibc-build/iconvdata LC_ALL=C
/sources/glibc-build/elf/ld.so.1 --library-path
/sources/glibc-build:/sources/glibc-build/math:/sources/glibc-build/elf:/sources/glibc-build/dlfcn:/sources/glibc-build/nss:/sources/glibc-build/nis:/sources/glibc-build/rt:/sources/glibc-build/resolv:/sources/glibc-build/crypt:/sources/glibc-build/nptl
/sources/glibc-build/nptl/tst-rwlock6 > /sources/glibc-build/nptl/tst-rwlock6.out
make[2]: *** [/sources/glibc-build/nptl/tst-rwlock6.out] Error 1
make[2]: Leaving directory `/sources/glibc-2.3.4/nptl'
make[1]: *** [nptl/tests] Error 2
make[1]: Leaving directory `/sources/glibc-2.3.4'
make: *** [check] Error 2

I checked the tst-rwlock6.out file, and it looks like this:

1st timedwrlock done
1st timedrdlock done
2nd timedwrlock done
child calling timedrdlock
started thread
1st child timedrdlock done
2nd child timedrdlock done
joined thread
1st timedwrlock done
1st timedrdlock done
2nd timedwrlock done
child calling timedrdlock
started thread
1st child timedrdlock done
2nd child timedrdlock done
joined thread
1st timedwrlock done
1st timedrdlock done
2nd timedwrlock done
child calling timedrdlock
started thread
1st child timedrdlock done
timeout too short
failure in round 2

I tried searching Google and the LFS site to see if this is something I
should be concerned about, but I couldn't find anything.  What is "timeout
too short"?  Does this mean the test timed out (something the LFS book says
may happen on older hardware), or is it something more serious?
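
From reading the output, my guess is that the test takes a write lock in the
main thread and then has a child thread call pthread_rwlock_timedrdlock()
with a short absolute deadline, expecting ETIMEDOUT and checking that the
call really waited until the deadline had passed.  Here's a rough sketch I
put together to understand the pattern -- this is my own code, not the
actual tst-rwlock6 source, and the 300 ms deadline is just a number I
picked:

/* My own sketch of a timed rwlock test -- NOT the real glibc test. */
#define _XOPEN_SOURCE 600
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

static void *child (void *arg)
{
  struct timespec deadline, before, after;

  clock_gettime (CLOCK_REALTIME, &before);

  /* Ask for a read lock, but give up roughly 300 ms from now.  The
     parent already holds the write lock, so ETIMEDOUT is expected.  */
  deadline = before;
  deadline.tv_nsec += 300000000;
  if (deadline.tv_nsec >= 1000000000)
    {
      deadline.tv_sec += 1;
      deadline.tv_nsec -= 1000000000;
    }

  int err = pthread_rwlock_timedrdlock (&lock, &deadline);
  if (err != ETIMEDOUT)
    {
      puts ("timedrdlock did not time out");
      exit (1);
    }

  clock_gettime (CLOCK_REALTIME, &after);

  /* If the call returned before the deadline had actually passed,
     complain -- presumably the kind of condition the real test
     reports as "timeout too short".  */
  if (after.tv_sec < deadline.tv_sec
      || (after.tv_sec == deadline.tv_sec
          && after.tv_nsec < deadline.tv_nsec))
    {
      puts ("timeout too short");
      exit (1);
    }

  puts ("child timedrdlock timed out as expected");
  return NULL;
}

int main (void)
{
  pthread_t th;

  /* Hold the write lock so the child's read lock has to wait.  */
  pthread_rwlock_wrlock (&lock);

  pthread_create (&th, NULL, child, NULL);
  pthread_join (th, NULL);

  pthread_rwlock_unlock (&lock);
  return 0;
}

Something like "gcc -Wall -o rwlock-sketch rwlock-sketch.c -lpthread -lrt"
should build it (-lrt for clock_gettime on glibc of this vintage).  If the
real test does a timing check like that last one, I could imagine it being
sensitive to a slow or heavily loaded machine, but I'd like to know for
sure.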

I did run "make check" a second time, and it finished fine; this was the
only error I've seen.

Any help would be appreciated.  Thanks!

 - Jon



