[RFC] The Future of the ALFS project
bdubbs at swbell.net
Tue Feb 28 09:30:19 PST 2006
George Makrydakis wrote:
> Provided there is time on my side, I am available to start coding for
> your project in C/C++/C#/Java (preference for C++ if in binary), I will
> in any case complete the bash - based "parsing" approach, for it may
> prove handy as a tool elsewhere in the project (support scripts). Also
> with it goes a work for table structures in bash.
Selecting the proper language for a project is an art. What is desired
is to make things easy to code and maintain as well as to provide for
transparency in the process.
I have nothing against what George has done, but quite frankly, what he
has done is prove that bash is a Turing complete (TC) language.
What that means is that anything you can do in one TC language, you can
do in another. I don't see any advantages over the jhalfs methodology.
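To make the "Turing complete" point concrete, here is a minimal sketch
(my own illustration, not George's actual code, and the package versions
are just placeholders) of bash modelling the kind of table structure he
mentions, using associative arrays (bash >= 4):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a package "table" held in a bash associative
# array (bash >= 4). Illustrates that bash can model the data
# structures a parser would need -- not George's implementation.
declare -A version

version[binutils]=2.16.1
version[gcc]=4.0.2
version[glibc]=2.3.6

# Walk the table and print name-version pairs
for pkg in "${!version[@]}"; do
    printf '%s-%s\n' "$pkg" "${version[$pkg]}"
done
```

That bash *can* do this is not in dispute; the question is whether it
is the best tool for the job.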
In fact, the books are written in XML using DocBook. The purpose of
XSLT is to transform XML into another form. That's exactly what jhalfs
does.
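In outline, the idea is to pull the commands out of the book's DocBook
sources and emit them as shell script text. A minimal sketch of such a
stylesheet (my own illustration, not the actual jhalfs stylesheet)
might look like:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of the kind of template used: extract the
     commands inside each <screen><userinput> element of the DocBook
     sources and emit them as plain text. Not the jhalfs stylesheet. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <!-- Suppress all other text in the book -->
  <xsl:template match="text()"/>

  <!-- Keep only the commands the reader is told to run -->
  <xsl:template match="screen/userinput">
    <xsl:value-of select="."/>
    <xsl:text>&#x0A;</xsl:text>
  </xsl:template>
</xsl:stylesheet>
```

Run with something like `xsltproc sketch.xsl chapter.xml > chapter.sh`
(file names hypothetical) and you get a build script straight from the
book.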
Let's look at the issues.
1. Is it easy to code and maintain? Any interpreted language that is
understood by the developers is reasonably easy to code and maintain.
This would include bash, perl, php, python, and several other
languages. C, C++, Ada, FORTRAN, Objective-C, and others are compiled
languages that can be difficult for less expert users to understand and
modify. Java is an intermediate language, somewhat like the original
Pascal, in that it creates an intermediate file that is then used by a
specialized interpreter. All are equivalent in functionality.
The only issue as I see it is whether it is desired to have transparency
in the process that actually extracts the code from the XML. xsltproc
does not offer much transparency here. The only way to check it is to
compare the input and the output. The transformation applied by the .xsl
files through xsltproc uses a significantly different programming
paradigm and is thus not transparent to most people.
2. Does the process use excessive resources such as time, memory, disk
space, etc.? nALFS requires a "profile" that is not automated. This is
not desirable. jhalfs takes a couple of minutes. Compared to the build
time of an LFS system, or even a typical BLFS application, this is
negligible. The other resources, such as memory and disk space, are also
negligible compared to the applications we are building, and so are
non-issues.
In my mind, I prefer the jhalfs process. It works now and is relatively
easy to understand and maintain. The only difficult part, the .xsl code
used to do the transformation, does not need frequent updates. The bash
scripts are easy to understand and modify and the input and output are
also easy to understand.
To me the ALFS project is essentially complete for LFS. It can be
extended to HLFS, CLFS, and BLFS using the same techniques if desired,
but it does not need the same level of effort as the other projects
where there is a constant churn of updated packages. It only needs to
be updated if there is a significant change to the LFS book structure.