Glibc-2.2.5 compile error in chapter 6 of LFS 3.2

Bill Maltby LFS Related lfsbill at wlmlx1.wlmcs.com
Mon Mar 11 09:36:54 PST 2002


Don,

Both machines have "adequate" swap. The "biggie" has two swaps -
one of 128MB and a secondary of 132MB. But I never saw "excessive"
use of these, and I began to wonder whether they were being
effectively used. What I saw, in retrospect, were signs of thrashing.
_Very_ slow (1 day install time before "unoptimizing") on the
"biggie" machine. After my umm... corrections, the install took about
10 hours. Since the performance improvement came along with reliable
operation when I reduced memory requirements, I deduced kernel 2.2
swap (actually paging) was not very good and the box probably was
thrashing. With all the i18n stuff being built, it would sit for
hours on the last two or three files of the glibc install.
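
If anyone wants to check for the same thing, this is just how I'd
look for thrashing (nothing LFS-specific, plain stock tools):

  # watch memory and swap activity every 5 seconds while the build
  # runs; big, sustained numbers in the "si" and "so" columns mean
  # the box is paging hard (thrashing) instead of working
  vmstat 5

  # one-shot view of how much RAM and swap is actually in use
  free

  # list the active swap areas and how much of each is used
  cat /proc/swaps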

The little machine has 64MB of swap. I still haven't gotten this one
completely through. The gcc install is dying with signal 11.

But I am tenacious.

Bill Maltby
billm at wlmcs.com

On Mon, 11 Mar 2002, Don Smith wrote:

> "Bill Maltby LFS Related" <lfsbill at wlmlx1.wlmcs.com> wrote in message
> news:Pine.LNX.4.10.10203102108320.2632-100000 at wlmlx1.wlmcs.com...
> > Gerard and Folks,
> >
> > I've been following along and finally figured that maybe what
> > you're seeing is somewhat related to what I've seen: consistently
> > inconsistent failures in the glibc install process.
> >
> > In my case, I believe I've worked around it, but maybe some of my
> > conditions apply in the questions you've been fielding.
> >
> > My BILLS semi-automated stuff is all bash. I am developing on two
> > low-resource machines: a 200MHz Pentium-MMX with 64MB running RH 6.2,
> > and a 166MHz AMD with 32MB running RH 6.0.
> >
> > Now the deal was that I was going for max performance, damn the
> > memory usage. I was suitably punished for my transgressions.
> >
> > Symptoms included random failures to install in glibc (the most
> > persistent and severe case), gcc, e2fsprogs, and ncurses. The mode
> > of failure was not consistent. Sometimes syntax errors in C code,
> > sometimes C compiler failure with signal 11, sometimes C syntax
> > errors in different places.
> >
> > After eliminating the possibility of hardware error by pure
> > dint of faith, I began to suspect that my wanton use of scarce
> > resources was somehow related, even though this is not a Microsoft
> > piece of... work. So I began "unoptimizing" some of my code a little
> > bit at a time and managed to eliminate all the failures
> > _except_ glibc's.
> >
> > Hmm... said I. I had noticed that glibc, with all the i18n stuff
> > being built, was a world-class memory hog. To clean that one up, I
> > had to inline all the code into one script, leaving effectively
> > only one level of nesting (see the sketch below).
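> >
> > To give a rough idea of what I mean (the script names here are
> > made up for illustration, not my actual BILLS scripts):
> >
> >   #!/bin/bash
> >   # Before: each step lived in its own helper script, so bash was
> >   # nested several processes deep while make ran underneath:
> >   #   ./build-all.sh -> ./build-glibc.sh -> ./do-configure.sh ...
> >   # After: the same commands inlined in one script, one bash deep.
> >   # (configure switches abbreviated - see the book for the real ones)
> >   cd /usr/src
> >   mkdir -p glibc-build && cd glibc-build
> >   ../glibc-2.2.5/configure --prefix=/usr --enable-add-ons
> >   make && make install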
> >
> > Where I think this _might_ relate is this. Are the folks having
> > problems on limited-resource machines? Are they running from
> > script? Are their machines in runlevel 3 or 5? I have found that
> > these things, combined and individually, seem to expose some kind
> > of weakness somewhere in the Linux/bash/make combination when it
> > comes to managing memory.
> >
> > Various levels of progress can be made by running single-user,
> > by typing all commands manually (no bash scripts) for the heavy
> > hitters, or, as I had to do, by simplifying all the scripts to
> > some degree.
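> >
> > (For anyone who wants to try it, this is roughly what I mean by
> > running single-user and doing the heavy hitters by hand; it
> > assumes a stock RH-style init:)
> >
> >   # drop to single-user mode so almost nothing else is competing
> >   # for RAM, then log in as root on the console
> >   telinit 1
> >
> >   # type the heavy steps yourself instead of going through layers
> >   # of wrapper scripts
> >   cd /usr/src/glibc-build
> >   make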
> >
> > Since my stuff was scripted and no changes were made to my chapter
> > 5 scripts in achieving success, I know that the static compiles
> > were done correctly. But glibc still failed until I reduced memory
> > usage.
> >
> > Now just tonight, gcc failed on the 32MB 166MHz RH 6.0 box with a
> > signal 11 in chapter 6. And glibc tried to enter an infinite loop.
> >
> > But armed with my new-found knowledge, I shut down X, nfs, inetd and
> > any other non-essentials and glibc finished. I am now rerunning gcc
> > and expect success.
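> >
> > For the record, "shut down X, nfs, inetd" just meant stopping the
> > init scripts by hand; the exact names below are from memory and
> > may differ on other setups:
> >
> >   # stop the non-essential services (RH 6.x keeps the scripts here)
> >   /etc/rc.d/init.d/nfs stop
> >   /etc/rc.d/init.d/inet stop
> >   # getting out of X: log out of the session, or drop the whole
> >   # machine to runlevel 3 so xdm/X goes away
> >   telinit 3
> >
> >   # confirm the memory actually came back before restarting the build
> >   free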
> >
> > I don't know if any of this is related, but the stuff I've seen
> > in the articles was beginning to look awfully familiar.
> >
> > Bill Maltby,
> > billm at wlmcs.com
> >
> 
> Something no one ever seems to mention is the amount of swap space they
> have. I had a 24MB memory machine with 128MB of swap and never ran out
> of resources. It was very slow, but it worked with no problems. How much
> swap space do you have allocated, Bill?
> 
> Just curious,
> Don
> 
