LFS-6.6, Stage2, glibc, nscd.c:442

Paul Rogers paulgrogers at fastmail.fm
Tue Jun 1 11:40:41 PDT 2010

> Others have said it: unless we can duplicate the problem somebody
> faces doing things slightly different, support can be hard to provide.

As Ken wrote and I believe, most of your development team can be
expected to stay pretty close to "front-line developments."  As seems
indicated by the current situation, someone should adopt a QC role, and
have one system that trails, i.e. has exactly the package versions
specified in the HSR, and verifies that each version of LFS does in fact
install flawlessly with those prerequisites.  In all this discussion
nobody has yet made that claim.  A point blank question then: who
installed 6.6 flawlessly with exactly the package versions given in the
Host System Requirements, i.e. gcc-3.0.1 and linux-2.6.18?

> phase 2 -- building the system
>      During this phase you would actually build the system as
>      described in the LFS and BLFS books.  The time that it would take
>      to do this is dependent on the speed of your build-host as well
>      as how much you are building.

There is a very old aphorism in system development: plan to throw the
first one away.  Since the question seems to suggest nothing like this
has been attempted in the past, my humble suggestion would be to build
the first one by hand, going through the book package by package, not
jhalfs.  Get through all the packages, discovering and dealing with
the inevitable glitches along the way.  Get LFS running.  See what
it's like.  It's surprisingly Spartan!  You'll certainly want to
install some BLFS packages to make the system "livable".  See how what
you'll have satisfies your expectations.  Then, since we all usually
make some compromises the first time through, if you like what you've
made and want to continue, tarball it up, put it away, and start over
from scratch.  Plan to make your second build as perfect as possible
given what was learned the first time through.  I'd estimate this time
to be a couple weeks to a month, depending on how thorough your
explorations are.

> The Host System requirements may indeed be too low for LFS 6.6, but I
> am reluctant to change them based on your input because you have made
> lots of changes.  You claim your scripts encompass the book's
> commands, but I don't have the time or desire to check that,
> especially when we have an automated way to build.

No, I don't expect you to--that's MY responsibility.  But as I suggested
above in reply to what Gerard wrote, I expect someone to have a
"reference system" with exactly those minimum packages, and verify that
such a system WILL build LFS flawlessly.  Use jhalfs if you want, but
then it's YOUR responsibility to be sure that a jhalfs build corresponds
to a hand build.  But SOMEBODY needs to check the HSR!

> If you want to help, fine.  Give us definitive reasons to change the
> book.  That includes validation of your findings using our tools.

Ummm, no, I'm not the author of the book.  I would say the only
responsibility users can be construed to have is to report problems WITH
THE BOOK--which means that they vouch for their usage of the book.  The
book tells us we can build LFS by hand from the HSR.  It seems I'm not
the first who reported this nscd problem, but all they got in return was
WFM!  But as I question, who has verified the HSR is sufficient?  I
would say what linuxfan has turned up about gcc-4.something being
required to compile the stack protector code is enough of a clue that
the book is wrong.  It's up to the author to confirm.

What I can confirm is that the "-fno-stack-protector" workaround is
insufficient.  With that, of course one must insert a glibc rebuild step
after producing a new gcc.  But on my system, with gcc-3.4.3, the test
suite shows 7-8 failures; an exact list can be provided later.
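
Spelled out as commands, the workaround I tried amounts to the
following.  The directory layout, glibc version, and configure switches
here follow the LFS-6.6 chapter conventions from memory, so treat this
as a sketch rather than a verified recipe:

```shell
# Sketch of the (insufficient) workaround discussed above: build glibc
# with the stack protector disabled, then rebuild once the new gcc
# exists.  Paths, versions, and switches are assumptions from the
# LFS-6.6 layout, not a tested recipe.
cd /sources
mkdir -p glibc-build && cd glibc-build
CFLAGS="-O2 -fno-stack-protector" \
    ../glibc-2.11.1/configure --prefix=/usr \
        --disable-profile --enable-add-ons \
        --enable-kernel=2.6.18
make && make check
# After gcc-4.4.3 is installed, this glibc build must be repeated so
# that libc is compiled by the newer compiler.
```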

> You don't need to change what you have.  It's easy enough to clone
> directories and to do testing with that.

I can follow the book well enough.  I can even vouch for my usage.
But I can't do your job, or Gerard's.  I began programming in 1966, in
FORTRAN II, on an IBM-1620, q.v.  I've retired from a 40-year career in
computing.  I can also vouch for a diminution of skills with advancing
age.  One's short term memory goes to hell!  That's one reason I make
my scripts.

> You seem to be demanding perfection from a small group of volunteers.
> We don't claim to be perfect.  There are too many combinations to
> check old versions of the packages against the latest version.

OK, I was doing my systems development professionally.  We were
expected, were paid to be right.  It just seems painfully obvious to me
that one of the tasks of the project is not only to verify that the book
works, that it builds LFS, but also that it builds on what you say it
builds on!
If you all don't want to do what's necessary, why are you volunteering?
The fact you're an unpaid volunteer isn't justification for doing a
slapdash job, is it?  Is that how you wish to be known?  My expectation,
even IMO the experience of the whole FOSS ethos, is that people who WANT
to be part of the FOSS environment WANT to bring their professionalism
to the effort.

> Where did that come from?  I don't see a reference.

Linuxfan reported it a few days back in the mailing list. 

The configuration item CONFIG_CC_STACKPROTECTOR:

    * prompt: Enable -fstack-protector buffer overflow detection
    * type: tristate
    * defined in arch/x86/Kconfig
    * found in Linux kernels: from the 2.6.19 release, still available
      in the 2.6.34 release

Help text

This option turns on the -fstack-protector GCC feature. This feature
puts, at the beginning of critical functions, a canary value on the
stack just before the return address, and validates the value just
before actually returning. Stack based buffer overflows (that need to
overwrite this return address) now also overwrite the canary, which gets
detected and the attack is then neutralized via a kernel panic.

This feature requires gcc version 4.2 or above, or a distribution gcc
with the feature backported. Older versions are automatically detected
and for those versions, this configuration option is ignored.

This option is not seen in any /boot/config-x.y.z (kernel config) files
I have saved from before gcc became >= 4.2.

"Older versions are automatically detected and for those versions, this
configuration option is ignored." [ ... thinking ...] Thus, with an
older gcc, it would not even appear as an effective kernel config option.
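
The kernel's "automatic detection" boils down to a small compile test.
A minimal sketch of the same probe, runnable by hand (the temporary
file path is an arbitrary choice of mine, not anything the kernel uses):

```shell
# Probe whether the host gcc accepts -fstack-protector, mimicking the
# kernel's automatic detection described in the help text above.
# The temporary output path is an arbitrary choice.
if echo 'int main(void) { return 0; }' | \
    gcc -fstack-protector -x c - -o /tmp/ssp-probe 2>/dev/null
then
    echo "stack-protector: supported"
else
    echo "stack-protector: not supported"
fi
rm -f /tmp/ssp-probe
```

On a gcc-3.x host this prints "not supported", which is exactly why the
config option silently disappears there.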


> You don't have any standing to 'expect' anything from us.  You can
> suggest, but with your attitude, my reaction is to push back and say
> no, even if that's wrong.

Petulance?  You need ego strokes for "being helpful"?  Such is
"professionalism", eh?  People report problems and you blow them off?
Why are you involved?

So what's your argument?  "We'll put whatever we please in our book, and
whether it works for you or not is none of our concern?"  Why bother?

What do you expect of yourself?  "Rogers' Second Law: Everything you do
communicates."

> I suspect someday soon, someone will try making an LFS-6.3 system
> using linux- (instead of Then, using that as a
> host, try to build LFS-6.6 to verify the minimum kernel requirement.

Should have been done BEFORE publishing LFS-6.6.  But that's not good
enough, IMO.  It needs to be built with exactly the HSR's if anything is
to be established at all.

>   You got a lot more out of them on this problem than I did.
>   I think it just boils down to the fact that they've put all this
>   time and effort into producing the, always, latest version of the
>   book AND it works for them.  Bringing the book to usability must be
>   time consuming enough, doing the support work just sucks the
>   remaining life out of them.

It is a very different job, I'll give you that.  Tech-support is NOT
for everyone!

>   Most of the posters with problems do seem to have, for whatever
>   reason, strayed from the path of righteousness and not followed the
>   book closely enough.  After hearing

I don't think that's true, IN THIS CASE.  It seems the problem is in
"starting conditions".  People whose host system uses gcc-3.x apparently
can't build the proper kernel to support glibc building in Stage 2.

>   So i uttered the prayer "-lssp" on the link line for 'nscd' and it
>   was brought forth and with it its mother, Glibc-2.11.1 sprang fully

I think what I tried was about the same, "-fno-stack-protector"--since I
can remove it at will with my package manager--but it wasn't sufficient.
As already mentioned, I got 7-8 errors in the tests, and they seemed
unrelated to timing/speed issues, if my extrapolation from a few
characters in the test names is a guide.  It remains to be seen if this
is just a compiler version issue, pending a recompilation after 
gcc-4.4.3 is installed.

> You have, I think, mentioned that you are using package management.

Indeed, of the "time-stamp" variety.  It's unobtrusive, doesn't "get in
the way" of whatever configure/make/make-install wants to do, and is
sufficient if one allows a quiet system to do one thing at a time.
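
For anyone unfamiliar with the approach, the whole technique fits in a
few commands.  This is a minimal sketch using throwaway paths of my own
choosing, not my actual scripts:

```shell
# Minimal sketch of time-stamp package management: drop a marker file,
# install, then record every file newer than the marker.  All paths
# here are illustrative choices, not the ones my scripts use.
DEMO=/tmp/ts-pm-demo
mkdir -p "$DEMO/root"
touch "$DEMO/MARK"
sleep 1                                       # ensure a later timestamp
echo "pretend binary" > "$DEMO/root/newprog"  # stand-in for "make install"
find "$DEMO/root" -type f -newer "$DEMO/MARK" > "$DEMO/filelist"
cat "$DEMO/filelist"                          # the package's file manifest
```

The resulting file list is what lets me later remove a package, or a
single flag like "-fno-stack-protector", at will.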

> You are also using a much older host system than most people have

An LFS-6.1 derived system, which is newer than the HSR says is sufficient.

> available.  When you make your own path, nobody has any experience of
> the differences..  It might be an interesting intellectual exercise to
> tear it apart and eventually discover where the problem lies, but it's
> not a good use of our time.

It's not YOUR job, not one I've asked you to do.  It is my
responsibility to make sure what I do isn't an obstruction.  You'll have
to take my word on that, just as I've taken yours that the HSR, or
better, is all I need to install LFS-6.6.  "Mutual trust?"  Oh, horrors!
All I've asked is that the HSR you publish is in fact able to flawlessly
install LFS-6.6.  That means with gcc-3.0.1.  Let the person who has
done so speak up.

>  And no, the phrase is not supposed to be original.  It's a statement
>  of the reality, as I see it.

Me too!

>  It's the versions that seemed to work when that page was last
>  revised.

And rechecking that isn't part of publishing a new version of LFS?
"Once upon a time this was good enough, but we haven't checked in some
time.  Take your chances."  Is that it?  That's how you want to be
known?

>  The "support team" is whoever happens to chime in on a thread.  Most
>  people here are well-intentioned, and we use our own experience both
>  in building and in the problems we've seen mentioned.  I'm happy to
>  ignore your future postings in this thread if I'm not helping.

It does require a certain attitude to do tech-support.

>  Again, I'm talking about how the book is *developed*.  You might
>  think there is a long period when a version of the book is marked as
>  a release candidate, and a large QA team then looks at the wording
>  and tests it on as many old hosts as they can lay their hands on.
>  Doesn't happen like that.  At some point, an rc is made.  The people
>  who have followed the development book might review it and test it
>  on old hosts, or they might not.  We usually get a few new people
>  trying it..

And that methodology obviously isn't finding certain flaws in the book.
A new methodology is required.  What I'm suggesting is necessary is that
SOMEBODY dedicated to production has at least a part-time function as
"trailer".  (S)he maintains a reference system at the HSR's minimum
requirements, and verifies that the new version actually will install
flawlessly, as advertised, calling for updates to the HSR as needed.

>  The 'stable' book *is* for general consumption.  Doesn't mean it's
>  perfect, doesn't mean it will work for people using old systems.

It certainly should, if they meet your HSR's.  If not, why even publish
them?  They should be as reliable as anything else in the book.

> If you build LFS, you pick up enough skills to do it again.  If you
> were using a regular distro, they would update it during the distro's
> life, and move it to  a newer release.

Indeed, I have, I do.  My regular distro IS LFS.

> Because you build it yourself, you have to make the decision about
> when to upgrade.  I suggest that a system lifetime of nearly 5
> years is "rather longer" than most people would attempt to keep a
> desktop in use.

Perhaps, but some of us don't like M$'s policy of planned obsolescence.
When one is retired that is an unsupportable expense.

> >> By the way, you _do_ need to remove toolchain source directories
> >> after
> >
> > Of course.  And I even know that WASN'T always the case in earlier
> > builds.
>  It was always intended, but probably not always mentioned.

In earlier versions binutils and something in the expect chain, IIRC,
had to be retained for subsequent packages.

> On a more serious note, we do try to address problems that come up.
> The issue is that we have seen lots of posts where the final
> resolution is "oops, it was my error, not the book's".   (Which you
> humorously noted.)

Certainly.  And that can be taken as a measure of the book's accuracy.
But that cannot be presumed.

> As far as nscd.c goes, the real problem is to determine why some
> systems appear to need -lssp and others don't.  Then a fix to the glibc
> build system can be made.  Unfortunately, the only ones that can do
> that are those that can duplicate the problem.

Simple enough it seems.  Try building with your HSR reference system.
Prove it can be done.
Paul Rogers
paulgrogers at fastmail.fm
Rogers' Second Law: "Everything you do communicates."
(I do not personally endorse any additions after this line. TANSTAAFL :-)


