Conrad's ALFS comments

Bill Maltby LFS Related lfsbill at wlmcs.com
Mon Aug 26 10:17:44 PDT 2002


Hi Rui,

On Mon, 26 Aug 2002, Rui Ferreira wrote:

> Hello Bill
> 
> >From: Bill Maltby LFS Related <lfsbill at wlmcs.com>
> >Subject: Re: Conrad's ALFS comments
> >
> >I reply here *only* to offer alternate considerations to the comments.
> ><snip>
> >On Mon, 26 Aug 2002, Rui Ferreira wrote:
> >
> > > Hi all,
> > > Hi Conrad.
> > >
> > > You posted your script [...] script for building an lfs system.
> > > Well, yours is pretty close, if not exactly.
> > > So, I'm going to tell you what I would change:
> > > . Put the functions on another file to help readability;
> >
> >Having the functions in the same file, collected at the top of the file,
> >allows viewing more conveniently if using a single screen. Marks can be
> >set at many places and "jumped to".
> This one has to do with the function names I proposed. Something to make the
> main script look more like the LFS book's table of contents.

Oh, I misunderstood. I agree that is important. In my BILLS semi-auto, I
accomplish that by making subdirs named chapter05, chapter06 and so on.
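Something like this, as a rough sketch only (the directory names and driver
below are made up for illustration, not the actual BILLS tree):

#!/bin/sh
# Hypothetical driver that mirrors the book's table of contents:
# each chapter is a subdirectory holding its numbered package scripts.
for chapter in chapter05 chapter06 chapter07; do
    for script in "$chapter"/*.sh; do
        echo "=== running $script ==="
        sh "$script" || exit 1    # stop at the first failure
    done
done

That way the top level reads like the book, and the per-package detail stays
out of sight until you descend into a chapter directory.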

> > > . The function names are very easy to read but too long. Instead of
> > > make_lfs_command_chroot, why not just mkcommchr;
> >
> >Good programming style suggests that names be longer, not shorter, so that
> Didn't know that. All I've learned has been by myself.
> >each name is self-explanatory. This eases learning for people new to the
> >scripts. Shorter names are preferred by folks like me, K & R traditional-
> So, what's K & R? Kernighan & Ritchie?

Yes. Although dated now, when C and UNIX popularity began to spread, they
advocated brevity for a number of reasons. Some related to the equipment
of the time, and some to the fact that most of us programmer types couldn't
really type all that fast (or well :P ). Since they were Bell Labs people,
they didn't have (m)any concerns about long-term maintenance issues, so
short names were the order of the day.

> >ists, who like to type less and are performance oriented on slower eqpt.
> >
> > > . Using lfs-commands from the site might not be a good idea because
> > > future versions might reveal incompatibilities and you'll have to deal
> > > with them by changing the functions;
> >
> >OTOH, if your goal is to remain current and you expect to keep the scripts
> >compatible with what is on the site, this is a nice convenience. You don't
> >have to manually inspect for differences and get the stuff yourself.
> Yes you do if you're using the 3.3 script to build CVS. Anything might appear.

You did say "automated"? Even in my semi-manual stuff I have started down
the road to parsing the XML/HTML/TEXT and diff'ing against saved versions.
With this capability, I am told about changes in archives and scripts that
I may need to review. In a full-on auto, I would expect that the changes
might be used without manual inspection. IIRC, chris posted earlier saying
that he hadn't had to change anything since 3.1.
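Roughly like this, just as a sketch (the URL and file names below are
placeholders, not the real locations on the site):

#!/bin/sh
# Fetch the current command file and compare it with the saved copy;
# only flag it for review when something actually changed.
URL=http://example.org/lfs-commands/chapter06/gcc    # placeholder URL
NEW=gcc.new
OLD=gcc.saved

wget -q -O "$NEW" "$URL"
if ! diff -u "$OLD" "$NEW" > gcc.diff; then
    echo "Commands changed since last build -- review gcc.diff"
else
    rm -f gcc.diff "$NEW"
fi

A full-on auto could drop the review step and just swap in the new file,
but then you are trusting whatever appears on the site.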

> >
> > > . Entering chroot to execute each package build saves you from having 
> > > more than one script, but it isn't elegant(?!?). I came up with
> > > something else.
> > > <snip>
> >
> >Elegance is to be admired, but not *necessarily* pursued. *Sometimes* it
> >is a trade-off between increased complexity (and, therefore, higher
> >maintenance or execution costs) to achieve elegance, and simplification
> >for the purposes of easier maintenance, faster execution, better
> >educational value or other such considerations.
> So, what's your opinion? Several chroots, one stage arg., or any other
> thing?

It depends on the goal of the effort. Let's say, for example, that the
goal is primarily educational. Then avoiding certain complexities that
may be quite elegant but difficult for a noob to understand might be the
right decision. If the goals are other than that, say maximum reuse of
code and maximum flexibility, then the decision could go the other way.

Also, there are operational considerations. For example, in the BILLS
utility, I want to be able to restart, in the worst case, at the beginning
of the chapter in which a failure occurred. So, the first thing in each
chapter from 6 on is the needed setup of mounted file systems, proc and
chroots. And, if the user desires, some "Y"s in the install lists are
changed to "N"s and packages that do not need to be redone are skipped.

So, in answer - there is no answer. It is highly dependent on the primary
factors in the design for the task at hand. For efficiency only -
no more than one of everything! For flexibility, operational or other
concerns, maybe more are needed.

> ><snip>


Bill Maltby
billm at wlmcs.com

