LFS-6.6, Stage2, glibc, nscd.c:442

Ken Moffat zarniwhoop73 at googlemail.com
Tue Jun 1 14:47:29 PDT 2010


On 1 June 2010 19:40, Paul Rogers <paulgrogers at fastmail.fm> wrote:
>
> As Ken wrote and I believe, most of your development team can be
> expected to stay pretty close to "front-line developments."  As seems
> indicated by the current situation, someone should adopt a QC role, and
> have one system that trails, i.e. has exactly the package versions
> specified in the HSR, and verifies that each version of LFS does in fact
> install flawlessly with those prerequisites.  In all this discussion
> nobody has yet made that claim.  A point blank question then: who
> installed 6.6 flawlessly with exactly the package versions given in the
> Host System Requirements. i.e. gcc-3.0.1, linux-2.6.18?
>

 I see Bruce has now changed the versions to those he knows to work.
I'm slightly disappointed by this, because the old versions were usually a
lot smaller and faster, but the new ones are still very old, so there are
no real worries about treating them as minima.
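
 The "treat them as minima" idea can be made concrete with a small
host-check script, in the spirit of the book's version-check script.
This is only a sketch; the minima shown below are illustrative, not the
official HSR list:

```shell
#!/bin/sh
# Sketch of a host-requirements check; the minima are illustrative.

# ver_ge A B: succeed if dotted version A >= B (relies on GNU "sort -V").
ver_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# check NAME CURRENT MINIMUM: report whether the host tool is new enough.
check() {
    if ver_ge "$2" "$3"; then
        echo "$1 $2 >= $3: OK"
    else
        echo "$1 $2 < $3: too old"
    fi
}

check gcc    "$(gcc -dumpversion 2>/dev/null || echo 0)" 4.1.2
check kernel "$(uname -r 2>/dev/null || echo 0)"         2.6.25
```

Comparing with "sort -V" avoids the usual trap of string-comparing
version numbers (where "2.6.9" would sort after "2.6.18").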

 I'll repeat what I've said over the years - if you have a self-built host
system (and ideally for any installed host system, but some distro setups
can make it unnecessarily hard), the first step to installing a new system
should be to use the kernel you are intending to boot.  It gives you a
.config that you can test while you still have sufficient applications.  This
doesn't solve everything (e.g. the x86_64 gcc-4.5 -Os problem), but it
should give some confidence.  It also reduces the weird and wonderful
things that can cause toolchain tests to fail.
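
 As a rough outline, that kernel-first step might look like the sketch
below. The version number and paths are purely illustrative, and the
commands are wrapped in a function precisely because they should be
adapted, not run verbatim:

```shell
# Sketch of the "boot the target kernel on the host first" step.
# Version number and paths are illustrative; adapt before use.
build_and_test_target_kernel() {
    cd /usr/src/linux-2.6.32 || return 1
    make mrproper
    make menuconfig              # produce the .config you will reuse for LFS
    make && make modules_install
    cp arch/x86/boot/bzImage /boot/vmlinuz-lfs-test
    cp .config /boot/config-lfs-test
    # Update the boot loader, reboot, and run this kernel for a while
    # with your normal applications before starting the LFS build.
}
```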

 So far, I've never heard anyone advocate "must build on the exact
versions" as a QC role until you did.  If we were a commercial operation,
we would probably tell you one adequate previous version from which to
build - not very useful for anyone trying LFS for the first time.

>
> OK, I was doing my systems development professionally.  We were
> expected, were paid to be right.  It just seems painfully obvious to me
> that one of the tasks of the project is not only to verify that the book
> works, it builds LFS, but that it builds on what you say it builds on!
> If you all don't want to do what's necessary, why are you volunteering?
> The fact you're an unpaid volunteer isn't justification for doing a
> slapdash job, is it?  Is that how you wish to be known?  My expectation,
> even IMO the experience of the whole FOSS ethos, is that people who WANT
> to be part of the FOSS environment WANT to bring their professionalism
> to the effort.
>
 So, you subscribe to the idea that the better is the enemy of the
good enough?  I'm surprised you use the Linux kernel.

 We do what we think is important.  Sometimes, LFS is a minor part of what
matters to us.  If you have different ideas about the way the project should
be going, you can make suggestions on -dev.  If your ideas don't get
sufficient support, you can create your own project.

 However, my personal view is that you built LFS several years ago and
have not kept in touch with what has changed.  You also seem to have a
"not important for me" attitude to updating for vulnerabilities (evidence:
your kernel version) which doesn't give me any confidence that you are
likely to do the right thing with regard to systems that people use - it
might be an adequate view for your own system, but it smells of poor
practice.

 You also seem to think paid-for and volunteer development is similar.
It isn't.  In one, the person or organisation with the money can attempt
to make the decisions.  In the other, people have to agree - those of
us who decide we do not wish to spend our time in arguments we
don't find useful may reduce our presence here.
>
> And rechecking that isn't part of publishing a new version of LFS?
> "Once upon a time this was good enough, but we haven't checked in some
> time.  Take your chances."  Is that it?  That's how you want to be
> known?

 If nobody is willing to test on multiple old hosts (and old distros) that
used to be adequate, I'd rather see an updated book that we know works
for most people, than one that will be released when it's passed QA but is
already long out of date.
>
>>  The "support team" is whoever happens to chime in on a thread.  Most
>>  people here are well-intentioned, and we use our own experience both
>>  in building and in the problems we've seen mentioned.  I'm happy to
>>  ignore your future postings in this thread if I'm not helping.
>
> It does require a certain attitude to do tech-support.

 Please reread my paragraph you quoted there (without attribution -
not my favourite email technique; at first I thought you were only
replying to Bruce), particularly the last sentence.  It's very easy to
cause offence, and you seem to manage it well.
>
>>  Again, I'm talking about how the book is *developed*.  You might
>>  think there is a long period when a version of the book is marked as
>>  a release candidate, and a large QA team then looks at the wording
>>  and tests it on as many old hosts as they can lay their hands on.
>>  Doesn't happen like that.  At some point, an rc is made.  The people
>>  who have followed the development book might review it and test it
>>  on old hosts, or they might not.  We usually get a few new people
>>  trying it.
>
> And that methodology obviously isn't finding certain flaws in the book.
> A new methodology is required.  What I'm suggesting is necessary is that
> SOMEBODY dedicated to production has at least a part-time function as
> "trailer".  (S)he maintains a reference system at the HSR's minimum
> requirements, and verifies that the new version actually will install
> flawlessly, as advertised, calling for updates to the HSR as
> experienced.
>

 Objectively, this appears to be a waste of resources (the gain from
wider builds on modern hosts seems likely to be much greater).  But if
you are happy to do it (and to maintain newer versions ready for when
older versions no longer meet the requirements), *and* to keep
them adequately updated to work around the vulnerabilities everyone
else updates for, feel free to try it.

 Realistically, anyone attempting this should be using the old systems
during the book's development, to help identify which change required
an updated host.

>> If you build LFS, you pick up enough skills to do it again.  If you
>> were using a regular distro, they would update it during the distro's
>> life, and move it to a newer release.
>
> Indeed, I have, I do.  My regular distro IS LFS.

 Sorry, we seem to use the word "regular" with different meanings.

> Perhaps, but some of us don't like M$'s policy of planned obsolescence.
> When one is retired that is an unsupportable expense.
>

 I don't have problems with the idea of trying to keep old mainstream
hardware working.  But part of the process for that is updating the
software.  M$ go with planned obsolescence to make money.  AFAIK
nobody is asking you to pay money to download and compile, beyond
the costs of the electricity you will use.

>> >> By the way, you _do_ need to remove toolchain source directories
>> >> after
>> >
>> > Of course.  And I even know that WASN'T always the case in earlier
>> > builds.
>>
>>  It was always intended, but probably not always mentioned.
>
> In earlier versions binutils and something in the expect chain, IIRC,
> had to be retained for subsequent packages.

And the pages for the affected packages had reminders that you should
not delete them.
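
 For current books, the per-package cleanup is simple.  Here is a minimal
sketch, using a scratch directory in place of the real sources directory
so the example is safe to run (package names and versions are
illustrative):

```shell
#!/bin/sh
# Sketch of removing a toolchain package's source and build trees.
# A scratch directory stands in for $LFS/sources; names are illustrative.
SOURCES=$(mktemp -d)

mkdir -p "$SOURCES/binutils-2.20" "$SOURCES/binutils-build"
# ... unpack, configure, build, and install the package here ...

# Stale source and build trees can contaminate later passes,
# so remove both once the package is installed:
rm -rf "$SOURCES/binutils-2.20" "$SOURCES/binutils-build"
```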

ĸen
-- 
After tragedy, and farce, "OMG poneys!"


